UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations
📄 Paper: arXiv:2510.13774
This repository provides the pretrained weights of the UrbanFusion model, a framework for learning robust spatial representations through stochastic multimodal fusion.
UrbanFusion can generate location encodings from any subset of its supported input modalities.
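The core idea behind handling arbitrary modality subsets is stochastic fusion: during training, each modality embedding may be randomly dropped before the available ones are fused into a single location encoding, so the model learns to cope with missing inputs. The following is a minimal, self-contained sketch of that idea in plain Python; the function name, mean-pooling fusion, and drop probability are illustrative assumptions, not the authors' implementation.

```python
import random

def fuse_embeddings(modality_embeddings, drop_prob=0.5, seed=None):
    """Mean-fuse a random subset of available modality embeddings.

    Illustrative sketch only: each modality is dropped with probability
    `drop_prob`, but at least one is always kept, so a fused encoding
    exists for any non-empty subset of inputs.
    """
    rng = random.Random(seed)
    kept = [e for e in modality_embeddings if rng.random() >= drop_prob]
    if not kept:  # guarantee at least one modality survives
        kept = [rng.choice(modality_embeddings)]
    dim = len(kept[0])
    # Average the surviving embeddings dimension-wise.
    return [sum(e[i] for e in kept) / len(kept) for i in range(dim)]

# Example: three toy 4-dimensional modality embeddings.
emb = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.0]]
fused = fuse_embeddings(emb, drop_prob=0.5, seed=0)
print(fused)  # fused encoding keeps the embedding dimension
```

At inference time the same fusion can simply be run with `drop_prob=0.0` over whichever modalities are present.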
🔗 The full source code is available on GitHub, and further details are described in our paper.
```bibtex
@article{muehlematter2025urbanfusion,
  title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author  = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.13774}
}
```
Base model: BAAI/bge-small-en-v1.5