QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF

This is a quantized version of Replete-AI/Replete-Coder-Instruct-8b-Merged, created using llama.cpp.

Model Description

This is a TIES merge between the following models:

The coding and overall performance of this model appears to be better than that of either base model used in the merge. Benchmarks will be published in the future.
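
The GGUF files can be used with llama.cpp directly or through any of its bindings. Below is a minimal usage sketch with the llama-cpp-python bindings; the repository ID is taken from this model card, while the chosen filename (a Q4_K_M quant), context size, and GPU offload settings are assumptions to adjust for your hardware and the quantization levels listed further down.

```python
# Minimal sketch using llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

# The filename pattern targets a Q4_K_M quant; this is an assumption,
# substitute any quantization level available in this repository.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Llama.from_pretrained downloads the matching file from the Hub on first use; a locally downloaded file can instead be loaded with Llama(model_path=...).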

GGUF
Model size: 8B params
Architecture: llama

Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
