QuantTrio/GLM-4.1V-9B-Thinking-GPTQ-Int4-Int8Mix
Text Generation · Safetensors · glm4v · GPTQ · Int4-Int8Mix · vLLM · conversational · 4-bit precision · gptq
arXiv: 2507.01006
License: MIT
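
The GPTQ and vLLM tags point to inference with vLLM, either offline or behind a server. A minimal Python sketch, assuming a vLLM build recent enough to register the glm4v architecture and with GPTQ kernels available; only the repository id comes from this page, and every flag and generation setting below is an assumption rather than a documented requirement:

```python
# Minimal sketch: load the GPTQ-quantized checkpoint with vLLM's offline API.
# Only the repository id is taken from this page; the flags are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="QuantTrio/GLM-4.1V-9B-Thinking-GPTQ-Int4-Int8Mix",
    quantization="gptq",      # weights are a GPTQ Int4/Int8 mix per the repo tags
    max_model_len=8192,       # assumed context cap to fit a single 24 GB GPU
    trust_remote_code=True,
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.chat(
    [{"role": "user", "content": "Summarize what GPTQ quantization changes about a model."}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```

Equivalently, `vllm serve QuantTrio/GLM-4.1V-9B-Thinking-GPTQ-Int4-Int8Mix` exposes the same checkpoint through an OpenAI-compatible HTTP endpoint, which the client sketch after the file listing below assumes.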
Files and versions
main · GLM-4.1V-9B-Thinking-GPTQ-Int4-Int8Mix · 9.5 GB · 1 contributor · History: 5 commits
Latest commit: JunHowie · Delete .mv · a869799 (verified) · 4 months ago
File                                 Size             Last commit                           Last updated
.gitattributes                       1.57 kB          Upload folder using huggingface_hub   6 months ago
README.md                            3.84 kB          Upload folder using huggingface_hub   6 months ago
chat_template.jinja                  922 Bytes        Upload folder using huggingface_hub   6 months ago
config.json                          1.97 kB          Upload folder using huggingface_hub   6 months ago
model-00001-of-00002.safetensors     5 GB (xet)       Upload folder using huggingface_hub   6 months ago
model-00002-of-00002.safetensors     4.48 GB (xet)    Upload folder using huggingface_hub   6 months ago
model.safetensors.index.json         168 kB           Upload folder using huggingface_hub   6 months ago
preprocessor_config.json             364 Bytes        Upload folder using huggingface_hub   6 months ago
requirements.txt                     247 Bytes        Upload folder using huggingface_hub   6 months ago
tokenizer.json                       20 MB (xet)      Upload folder using huggingface_hub   6 months ago
tokenizer_config.json                4.8 kB           Upload folder using huggingface_hub   6 months ago
video_preprocessor_config.json       365 Bytes        Upload folder using huggingface_hub   6 months ago
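
The chat_template.jinja, preprocessor_config.json, and video_preprocessor_config.json entries above indicate an image/video-capable chat model. A client-side sketch against an OpenAI-compatible vLLM endpoint, assuming a server started as described earlier; the base URL, API key, and image URL are placeholders, not values from this repository:

```python
# Hypothetical client call to a vLLM server hosting this checkpoint.
# Base URL, API key, and image URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="QuantTrio/GLM-4.1V-9B-Thinking-GPTQ-Int4-Int8Mix",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }],
    max_tokens=512,
)
print(response.choices[0].message.content)
```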