UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations

This repository provides the pretrained weights of the UrbanFusion model — a framework for learning robust spatial representations through stochastic multimodal fusion.

UrbanFusion can generate location encodings from any subset of the following modalities:

  • 📍 Geographic coordinates
  • 🏙️ Street-view imagery
  • 🛰️ Remote sensing data
  • 🗺️ OSM basemaps
  • 🏬 Points of interest (POIs)

🔗 The full source code is available on GitHub, and further details are described in our paper.
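The core idea behind "stochastic multimodal fusion" is that, during training, a random subset of the available modality embeddings is dropped before fusion, so the resulting encoder stays robust when only some modalities are available at inference time. The sketch below is an illustrative toy version of that idea with 2-d embeddings and mean-pooling as the fusion operator; it is not the UrbanFusion implementation (see the GitHub repository for the actual code), and the function and parameter names are hypothetical.

```python
import random

def stochastic_fusion(embeddings, p_drop=0.5, rng=None):
    """Fuse per-modality embeddings by averaging a random subset.

    embeddings: dict mapping modality name -> embedding (list of floats,
                all the same length).
    p_drop:     probability of dropping each modality independently.
    At least one modality is always kept so the fused vector is defined.
    Returns the fused embedding and the list of modalities that were kept.
    """
    rng = rng or random.Random()
    kept = [m for m in embeddings if rng.random() > p_drop]
    if not kept:  # guarantee at least one modality survives the dropout
        kept = [rng.choice(list(embeddings))]
    dim = len(next(iter(embeddings.values())))
    fused = [0.0] * dim
    for m in kept:
        for i, v in enumerate(embeddings[m]):
            fused[i] += v
    return [x / len(kept) for x in fused], kept

# Toy 2-d embeddings for one location, one per modality
emb = {
    "coordinates": [1.0, 0.0],
    "street_view": [0.0, 1.0],
    "remote_sensing": [1.0, 1.0],
}
vec, used = stochastic_fusion(emb, p_drop=0.5, rng=random.Random(0))
```

Because each training step sees a different surviving subset, the contrastive objective cannot rely on any single modality being present, which is what makes the learned representations usable from "any subset" of inputs.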


📖 Citation

@article{muehlematter2025urbanfusion,
  title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author  = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.13774}
}
