RaBiT: Residual-Aware Binarization Training for Accurate and Efficient LLMs Paper • 2602.05367 • Published 7 days ago • 7
DFlash: Block Diffusion for Flash Speculative Decoding Paper • 2602.06036 • Published 6 days ago • 40
POP: Prefill-Only Pruning for Efficient Large Model Inference Paper • 2602.03295 • Published 9 days ago • 4
Fairy2i: Training Complex LLMs from Real LLMs with All Parameters in {±1, ±i} Paper • 2512.02901 • Published Dec 2, 2025 • 6
Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models Paper • 2511.23319 • Published Nov 28, 2025 • 24
Metis: Training Large Language Models with Advanced Low-Bit Quantization Paper • 2509.00404 • Published Aug 30, 2025 • 7
Jamba 1.7 Collection The AI21 Jamba family of models comprises hybrid SSM-Transformer foundation models, blending speed, efficient long-context processing, and accuracy. • 4 items • Updated Jul 2, 2025 • 12