Nasir
Your-Zeshan
AI & ML interests
Hardware: MacBook Pro (M4) with 48 GB RAM, typically 36 GB allocated as VRAM for LM Studio/MLX local inference and optimal model performance.
Preferences: High-performance models optimized for Apple Silicon, particularly quantized formats (GGUF/MLX) and builds tuned for the MLX framework. 4-bit and 8-bit quantized models are preferred for efficient memory usage within the 36 GB VRAM allocation.
Local Apps: Jan, MLX LM, LM Studio, and Ollama, all configured for the MacBook Pro (M4) hardware described above.
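A minimal sketch of this kind of setup, using the mlx_lm Python package to load and run a 4-bit MLX-quantized model; the repository id and prompt below are placeholder examples, not models taken from this profile:

# Load a 4-bit MLX-quantized model (repo id is a placeholder example).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

# Run a short generation to check the model fits and responds.
prompt = "Summarize the benefits of 4-bit quantization on Apple Silicon."
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)

The same load/generate calls work for 8-bit variants; memory use scales roughly with parameter count times bits per weight, which is what makes 4-bit and 8-bit builds attractive within a 36 GB budget.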
Recent Activity
reacted to RakshitAralimatti's post with 🚀 (1 day ago)
Just built my entire AI Engineer portfolio by pasting 2 links (GitHub and LinkedIn) into moonshotai's Kimi 2.5 (https://huggingface.co/moonshotai).
That's it. That's the workflow.
Zero coding. Zero iteration. Zero "make the button bigger."
See for yourself: https://rakshit2020.github.io/rakshitaralimatti.github.io/
The model:
✅ Scraped my GitHub repos automatically
✅ Pulled my experience from LinkedIn
✅ Designed an Aurora Glass theme
✅ Mapped every skill to projects
✅ Added animations I'd never code myself
liked a Space (2 months ago): baidu/ERNIE-4.5-VL-28B-A3B-Thinking
liked a model (3 months ago): nightmedia/Qwen3-Next-80B-A3B-Instruct-qx86n-mlx
Organizations
None yet