Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF

This model was converted to GGUF format from aquif-ai/aquif-3.6-8B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

aquif-3.6-8B

Summary

aquif-3.6-8B is a hybrid reasoning model that automatically determines when and how deeply to think based on query complexity. Built on aquif-3.5-8B-Think with AutoThink RL data, it uses 28% fewer tokens on average while improving benchmark performance by 4%.

Automatic Thinking

aquif-3.6-8B dynamically decides whether and how much to think based on query complexity. Inspired by KAT-V1's automatic-thinking approach, it was trained with AutoThink RL data on top of aquif-3.5-8B-Think and produces output in the following format:

<judge>
[analyzes whether to think or not]
</judge>

<think_on/off>
<think>
[thinking content]
</think>

<answer>
[final answer]
</answer>

This is the same format used by KAT-V1-40B. Unlike DeepSeek-V3.1's toggleable reasoning, which requires manual control (thinking_on/off), aquif-3.6's judge autonomously allocates reasoning depth, adapting its cognitive effort to each task.
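
For downstream use, the tagged output can be split into its parts with simple string processing. Below is a minimal, illustrative Python sketch; the parse_autothink_output helper is hypothetical and assumes only the tag names shown above (real outputs may omit the think block when thinking is off).

import re

def parse_autothink_output(text: str) -> dict:
    """Split aquif-3.6 output into judge, thinking, and answer parts.
    Hypothetical helper: assumes only the tag format shown above."""
    def grab(tag: str):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return m.group(1).strip() if m else None
    return {
        "judge": grab("judge"),      # why the model chose to think or not
        "thinking": grab("think"),   # None when thinking is switched off
        "answer": grab("answer"),    # the user-facing reply
    }

example = (
    "<judge>Simple factual query, no deep reasoning needed.</judge>\n"
    "<answer>Paris is the capital of France.</answer>"
)
print(parse_autothink_output(example))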

Key Features

  • 🧠 Dynamic Reasoning: Automatically determines when and how deeply to think
  • ⚡ 28% More Efficient: Significant token reduction while improving performance
  • 📈 Better Performance: 4% average improvement across benchmarks
  • 🎯 Smart Resource Allocation: 12% reduction in thinking ratio on average

Performance

Benchmark aquif-3.6-8B aquif-3.5-8B Improvement
AIME 2025 82.5 81.4 +1%
LiveCodeBench 64.2 61.5 +4%
GPQA Diamond 71.0 66.8 +6%
Average 72.6 69.9 +4%

Token Efficiency

Benchmark aquif-3.6-8B aquif-3.5-8B Reduction
AIME 2025 15,670 21,265 -26%
LiveCodeBench 13,240 19,460 -32%
GPQA Diamond 8,760 11,560 -24%
Average 12,557 17,428 -28%
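
For reference, the Reduction column and the averages above follow directly from the per-benchmark token counts; a quick sketch of the arithmetic (numbers taken from the table, rounded to the nearest percent):

# Token counts copied from the table above.
tokens_v36 = {"AIME 2025": 15670, "LiveCodeBench": 13240, "GPQA Diamond": 8760}
tokens_v35 = {"AIME 2025": 21265, "LiveCodeBench": 19460, "GPQA Diamond": 11560}

for bench in tokens_v36:
    change = (tokens_v36[bench] - tokens_v35[bench]) / tokens_v35[bench]
    print(f"{bench}: {change:+.0%}")          # -26%, -32%, -24%

avg_v36 = sum(tokens_v36.values()) / 3        # ~12,557
avg_v35 = sum(tokens_v35.values()) / 3        # ~17,428
print(f"Average: {(avg_v36 - avg_v35) / avg_v35:+.0%}")  # ~-28%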

Thinking Ratio

Benchmark aquif-3.6-8B aquif-3.5-8B Reduction
AIME 2025 93.0% 100.0% -7%
LiveCodeBench 82.0% 100.0% -18%
GPQA Diamond 89.0% 100.0% -11%
Average 88.0% 100.0% -12%

Benchmark Highlights

  • AIME 2025: 26% fewer tokens, +1% performance, -7% thinking ratio
  • LiveCodeBench: 32% fewer tokens, +4% performance, -18% thinking ratio
  • GPQA Diamond: 24% fewer tokens, +6% performance, -11% thinking ratio

Model Details

  • Base Model: aquif-3.5-8B-Think (8B parameters)
  • Architecture: Hybrid reasoning with dynamic thinking allocation
  • Context Length: 40K tokens
  • License: Apache 2.0

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -c 2048

Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -c 2048
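
Once llama-server is running, it exposes an OpenAI-compatible /v1/chat/completions endpoint that any HTTP client can call. A minimal sketch, assuming the server's default address of http://127.0.0.1:8080 (note that -c 2048 above caps the context at 2,048 tokens; the model supports up to 40K, so raise it for longer prompts if memory allows):

import json
import urllib.request

# Minimal example request against a locally running llama-server.
payload = {
    "messages": [
        {"role": "user", "content": "How many primes are there below 50?"}
    ],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

# The content may include <judge>/<think> sections before the final <answer>.
print(body["choices"][0]["message"]["content"])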