VibeStudio/MiniMax-M2-THRIFT-55-MLX-4bit
Tags: MLX · Safetensors · minimax · Mixture of Experts · bfloat16 · sglang · custom_code · 4-bit precision
Datasets: nick007x/github-code-2025 · tatsu-lab/alpaca
License: MIT
Community · Discussions (1 open, 0 closed)
Could you please release a 6-bit or 6.5-bit quantization that requires 93-97 GB of memory? Such a model could be deployed on a Mac with 128 GB of RAM. Thank you very much!
#1 · opened 5 days ago by mimeng1990
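
For reference, a straight 6-bit MLX quantization along these lines could be produced with mlx_lm's convert utility. This is a minimal sketch only: the source repository path and output directory below are hypothetical placeholders, and since this model ships custom code, the standard conversion path may need extra handling. A fractional average bit width such as 6.5 would additionally require per-layer (mixed-precision) quantization settings, which are not shown here.

```python
# Minimal sketch: producing a 6-bit MLX quantization with mlx_lm.
# The hf_path and mlx_path values are hypothetical placeholders, not
# confirmed repository names; the model's custom code may require
# adjustments to this standard convert flow.
from mlx_lm import convert

convert(
    hf_path="VibeStudio/MiniMax-M2-THRIFT-55",  # assumed unquantized source repo
    mlx_path="MiniMax-M2-THRIFT-55-MLX-6bit",   # local output directory
    quantize=True,
    q_bits=6,         # MLX affine quantization supports 2/3/4/6/8 bits
    q_group_size=64,  # default quantization group size
)
```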