For the T2T task of the Workshop on Asian Translation (WAT) 2025, these are fine-tuned models with NLLB-200-XB as the base model, trained on the WAT data plus 100k Samanantar pairs.
Debasish Dhal
DebasishDhal99