aquif-3.6-1B

Summary

aquif-3.6-1B is a hybrid reasoning model that automatically determines when and how deeply to think based on query complexity. Built on aquif-3.5-Nano-1B with AutoThink RL data, it achieves 28% better token efficiency and a 4% performance improvement across benchmarks.

Automatic Thinking

aquif-3.6-1B is a hybrid reasoning model that dynamically decides whether and how much to think based on query complexity. Inspired by aquif-3.6-8B's automatic-thinking approach and trained with AutoThink RL data on top of aquif-3.5-Nano-1B, the model uses the following output format:

<judge>
[analyzes whether to think or not]
</judge>

<think_on/off>
<think>
[thinking content]
</think>

<answer>
[final answer]
</answer>

This is the same format as aquif-3.6-8B. Unlike aquif-3.5-Plus's toggleable reasoning, which requires manual control (thinking_on/off), aquif-3.6's judge autonomously allocates reasoning depth, adapting its cognitive effort to each task automatically.
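If you consume raw completions programmatically, the tagged sections can be split apart with simple pattern matching. The snippet below is a minimal sketch assuming the tags appear verbatim in the decoded text; the function name and example string are illustrative, not part of the model's tooling:

import re

def parse_aquif_output(text: str) -> dict:
    """Split a raw aquif-3.6 completion into its judge, thinking, and answer parts."""
    def section(tag: str) -> str:
        # Grab the content between <tag> ... </tag>, or return "" if the tag is absent
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
        return match.group(1).strip() if match else ""

    return {
        "judge": section("judge"),      # why the model chose to think or not
        "thinking": section("think"),   # empty when the judge turns thinking off
        "answer": section("answer"),    # the user-facing reply
    }

# Illustrative completion where the judge decided no thinking was needed
raw = "<judge>Simple factual query; no reasoning needed.</judge>\n<answer>Paris.</answer>"
print(parse_aquif_output(raw))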

Key Features

  • 🧠 Dynamic Reasoning: Automatically determines when and how deeply to think
  • ⚡ 28% More Efficient: Significant token reduction while improving performance
  • 📈 Better Performance: 4% average improvement across benchmarks
  • 🎯 Smart Resource Allocation: 12% reduction in thinking ratio on average

Performance

Benchmark        aquif-3.6-1B   Qwen3-1.7B   Improvement
AIME 2025                75.0         39.4        +35.6%
LiveCodeBench            57.5         33.2        +24.3%
GPQA Diamond             52.8         40.1        +12.7%
Average                  61.8         37.6        +24.2%

Token Efficiency

Benchmark        aquif-3.6-1B   Qwen3-1.7B   Reduction
AIME 2025              13,670       18,450        -26%
LiveCodeBench          10,270       13,890        -26%
GPQA Diamond            6,870       12,100        -43%
Average                10,270       14,813        -32%

Thinking Ratio

Benchmark        aquif-3.6-1B   Qwen3-1.7B   Reduction
AIME 2025               84.0%       100.0%        -16%
LiveCodeBench           78.0%       100.0%        -22%
GPQA Diamond            81.0%       100.0%        -19%
Average                 81.0%       100.0%        -19%

Benchmark Highlights

  • AIME 2025: 26% fewer tokens, +35.6% performance, -16% thinking ratio
  • LiveCodeBench: 26% fewer tokens, +24.3% performance, -22% thinking ratio
  • GPQA Diamond: 43% fewer tokens, +12.7% performance, -19% thinking ratio

Model Details

  • Base Model: aquif-3.5-Nano-1B (1.7B parameters)
  • Architecture: Hybrid reasoning with dynamic thinking allocation
  • Context Length: 40K tokens
  • License: Apache 2.0

Usage

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Edge-Quant/aquif-3.6-1B-Q4_K_M-GGUF --hf-file aquif-3.6-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
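If you want an interactive chat that applies the model's chat template (and therefore the judge/think/answer format), you can run the CLI in conversation mode instead; -cnv is the conversation flag in recent llama.cpp builds:

llama-cli --hf-repo Edge-Quant/aquif-3.6-1B-Q4_K_M-GGUF --hf-file aquif-3.6-1b-q4_k_m.gguf -cnv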

Server:

llama-server --hf-repo Edge-Quant/aquif-3.6-1B-Q4_K_M-GGUF --hf-file aquif-3.6-1b-q4_k_m.gguf -c 2048
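The server exposes an OpenAI-compatible HTTP API (by default on port 8080), so once it is running you can query it with any OpenAI-style client or with plain curl; the prompt below is only an example:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "How many prime numbers are there below 100?"}]}'

Note that -c 2048 caps the context window at 2,048 tokens; raise it (for example -c 40960) if you want to use more of the model's 40K context.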

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo, as outlined below.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
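Note that recent llama.cpp versions build with CMake rather than make; if the command above fails, the equivalent CMake build is roughly the following (add hardware-specific options such as -DGGML_CUDA=ON for Nvidia GPUs), with the resulting binaries placed under build/bin/:

cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release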

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Edge-Quant/aquif-3.6-1B-Q4_K_M-GGUF --hf-file aquif-3.6-1b-q4_k_m.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Edge-Quant/aquif-3.6-1B-Q4_K_M-GGUF --hf-file aquif-3.6-1b-q4_k_m.gguf -c 2048
GGUF Details

  • Model size: 2B params
  • Architecture: qwen3
  • Quantization: 4-bit (Q4_K_M)

Model tree for Edge-Quant/aquif-3.6-1B-Q4_K_M-GGUF

  • Finetuned from: Qwen/Qwen3-1.7B
  • This model: one of 3 quantized versions (Q4_K_M GGUF)