# Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF

This model was converted to GGUF format from [aquif-ai/aquif-3.6-8B](https://huggingface.co/aquif-ai/aquif-3.6-8B) using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

---

# aquif-3.6-8B

## Summary
aquif-3.6-8B is a hybrid reasoning model that automatically determines when and how deeply to think based on query complexity. Built on aquif-3.5-8B-Think with AutoThink RL data, it achieves 28% better token efficiency and 4% performance improvement across benchmarks.
## Contents
- Key Features - Dynamic reasoning, efficiency gains, and smart resource allocation
- Performance - Benchmark results showing 4% average improvement
- Token Efficiency - 28% reduction in token usage
- Thinking Ratio - 12% reduction in thinking frequency
- Benchmark Highlights - Detailed results for AIME, LiveCodeBench, and GPQA Diamond
- Model Details - Architecture and specifications
- Usage - Code examples for implementation
- Previous Versions - Links to earlier models
## Automatic Thinking

aquif-3.6-8B is a hybrid reasoning model that dynamically decides whether and how much to think based on query complexity. Inspired by KAT-V1's automatic-thinking approach and trained with AutoThink RL data on top of aquif-3.5-8B-Think, the model uses the following output format:

```
<judge>
[analyzes whether to think or not]
</judge>
<think_on/off>
<think>
[thinking content]
</think>
<answer>
[answer content]
</answer>
```

This is the same format as KAT-V1-40B. Unlike DeepSeek-V3.1's toggleable reasoning, which requires manual control (thinking_on/off), aquif-3.6's judge autonomously allocates reasoning depth, adapting its cognitive effort to each task automatically.
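Because the output interleaves judge, optional thinking, and answer segments, downstream code usually needs to separate the tags before showing the answer to a user. Below is a minimal post-processing sketch based on the tag format shown above; the helper name, the regexes, and the exact judge token in the example are illustrative assumptions, not an official API:

```python
import re


def parse_autothink_output(text: str) -> dict:
    """Split a raw aquif-3.6 completion into judge/think/answer parts.

    Illustrative helper based on the tag format above; returns an empty
    string for any segment the model omitted (e.g. no <think> block when
    the judge decides extended thinking is unnecessary).
    """
    def extract(tag: str) -> str:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
        return match.group(1).strip() if match else ""

    return {
        "judge": extract("judge"),
        "thinking": extract("think"),
        "answer": extract("answer"),
    }


# Hypothetical completion where the judge turned thinking off:
raw = (
    "<judge>Simple arithmetic; no extended thinking needed.</judge>\n"
    "<think_off>\n"
    "<answer>4</answer>"
)
print(parse_autothink_output(raw))
# {'judge': 'Simple arithmetic; no extended thinking needed.', 'thinking': '', 'answer': '4'}
```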
## Key Features

- 🧠 **Dynamic Reasoning**: Automatically determines when and how deeply to think
- ⚡ **28% More Efficient**: Significant token reduction while improving performance
- 📊 **Better Performance**: 4% average improvement across benchmarks
- 🎯 **Smart Resource Allocation**: 12% reduction in thinking ratio on average
## Performance
| Benchmark | aquif-3.6-8B | aquif-3.5-8B | Improvement |
|---|---|---|---|
| AIME 2025 | 82.5 | 81.4 | +1% |
| LiveCodeBench | 64.2 | 61.5 | +4% |
| GPQA Diamond | 71.0 | 66.8 | +6% |
| Average | 72.6 | 69.9 | +4% |
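The averages follow directly from the three benchmark rows: (82.5 + 64.2 + 71.0) / 3 ≈ 72.6 for aquif-3.6-8B, and the relative gain is (72.6 − 69.9) / 69.9 ≈ +3.9%, rounded to the +4% reported above.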
## Token Efficiency

| Benchmark | aquif-3.6-8B (tokens) | aquif-3.5-8B (tokens) | Reduction |
|---|---|---|---|
| AIME 2025 | 15,670 | 21,265 | -26% |
| LiveCodeBench | 13,240 | 19,460 | -32% |
| GPQA Diamond | 8,760 | 11,560 | -24% |
| Average | 12,557 | 17,428 | -28% |
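The headline 28% figure is the relative drop in average token usage: (17,428 − 12,557) / 17,428 ≈ 27.9%, rounded to 28%.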
## Thinking Ratio
| Benchmark | aquif-3.6-8B | aquif-3.5-8B | Reduction |
|---|---|---|---|
| AIME 2025 | 93.0% | 100.0% | -7% |
| LiveCodeBench | 82.0% | 100.0% | -18% |
| GPQA Diamond | 89.0% | 100.0% | -11% |
| Average | 88.0% | 100.0% | -12% |
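The reductions in this table are percentage-point differences: the baseline aquif-3.5-8B-Think engages extended thinking on every query (100%), while aquif-3.6-8B skips it on roughly 12% of queries on average (100% − 88%).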
## Benchmark Highlights

- **AIME 2025**: 26% fewer tokens, +1% performance, -7% thinking ratio
- **LiveCodeBench**: 32% fewer tokens, +4% performance, -18% thinking ratio
- **GPQA Diamond**: 24% fewer tokens, +6% performance, -11% thinking ratio
## Model Details

- **Base Model**: 8B parameters
- **Architecture**: Hybrid reasoning with dynamic thinking allocation
- **Context Length**: 40K tokens
- **License**: Apache 2.0
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -c 2048
```
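Once running, llama-server exposes an OpenAI-compatible HTTP API (on port 8080 by default). A minimal sketch of querying it from Python, assuming the default host/port and the `requests` library (the prompt is arbitrary):

```python
import requests

# Assumes llama-server is running locally on its default port (8080).
response = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is 2 + 2?"},
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```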
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). Note that recent llama.cpp versions have migrated from Make to CMake, so if `make` fails, follow the current build instructions in the repo.

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Edge-Quant/aquif-3.6-8B-Q4_K_M-GGUF --hf-file aquif-3.6-8b-q4_k_m.gguf -c 2048
```