- mvp-lab/LLaVA-OneVision-1.5-Instruct-Data (dataset)
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-85M (dataset)
- lmms-lab/LLaVA-OneVision-1.5-8B-Instruct (Image-Text-to-Text, 9B)
- lmms-lab/LLaVA-OneVision-1.5-4B-Instruct (Image-Text-to-Text, 5B)
AI & ML interests: multi-modal foundation models
Datasets (6):
- mvp-lab/LLaVA-OneVision-1.5-RL-Data
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-85M
- mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
- mvp-lab/LLaVA-558K-Webdataset
- mvp-lab/LLaVA-NeXT-780k-webdataset
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-Webdataset-Quick-Start-3M