moxin-org / MiniMax-M2-GGUF (Moxin Organization)

Tags: Text Generation · GGUF · MiniMaxAI · MiniMaxM2ForCausalLM · llama.cpp · moxin-org · imatrix · conversational
Paper: arXiv:2509.25689
License: mit
Branch: main
MiniMax-M2-GGUF / Q8_0 (243 GB total)
1 contributor · History: 1 commit
bobchenyx: "Upload folder using huggingface_hub", commit 14112d1 (verified), 8 days ago
File                                     Size
MiniMax-M2-Q8_0-00001-of-00010.gguf      24.8 GB
MiniMax-M2-Q8_0-00002-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00003-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00004-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00005-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00006-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00007-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00008-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00009-of-00010.gguf      24.7 GB
MiniMax-M2-Q8_0-00010-of-00010.gguf      20.8 GB

All ten shards were added in the same commit ("Upload folder using huggingface_hub", 8 days ago).
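The shard filenames above follow llama.cpp's standard split-GGUF naming scheme, `<prefix>-%05d-of-%05d.gguf` with a 1-based shard index; llama.cpp tools pointed at the first shard discover the remaining shards from this pattern. A minimal sketch of the convention (the helper name `gguf_split_names` is illustrative, not part of any library):

```python
def gguf_split_names(prefix: str, n_split: int) -> list[str]:
    """Generate shard filenames in llama.cpp's split naming scheme:
    <prefix>-<index>-of-<total>.gguf, both fields zero-padded to 5 digits,
    index starting at 1."""
    return [f"{prefix}-{i:05d}-of-{n_split:05d}.gguf" for i in range(1, n_split + 1)]

shards = gguf_split_names("MiniMax-M2-Q8_0", 10)
print(shards[0])   # MiniMax-M2-Q8_0-00001-of-00010.gguf
print(shards[-1])  # MiniMax-M2-Q8_0-00010-of-00010.gguf
```

In practice this means you only pass the `-00001-of-00010` file to a llama.cpp binary's `-m` flag, provided all ten shards sit in the same directory.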