Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping


Subash Khanal (Washington University in St. Louis)*, Srikumar Sastry (Washington University in St. Louis), Aayush Dhakal (Washington University in St. Louis), Nathan Jacobs (Washington University in St. Louis)
The 34th British Machine Vision Conference

Abstract

We focus on the task of soundscape mapping, which involves predicting the most probable sounds that could be perceived at a particular geographic location. We utilise recent state-of-the-art models to encode geotagged audio, a textual description of the audio, and an overhead image of its capture location using contrastive pre-training. The result is a shared embedding space for the three modalities, which enables the construction of soundscape maps for any geographic region from textual or audio queries. On the SoundingEarth dataset, our approach significantly outperforms the existing state of the art, improving image-to-audio Recall@100 from 0.256 to 0.450. Our code is available at https://github.com/mvrl/geoclap.
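The abstract describes contrastive pre-training that aligns three modalities (audio, text, overhead image) in one embedding space. A common way to realise this is to sum a CLIP-style symmetric InfoNCE loss over the three modality pairs; the sketch below illustrates that idea in NumPy. The function names, the temperature value, and the pairwise-sum formulation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere for cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of paired embeddings.

    Row i of `a` and row i of `b` are treated as the positive pair;
    all other rows in the batch act as negatives. The temperature
    value here is a conventional choice, not one from the paper.
    """
    logits = (a @ b.T) / temperature          # (n, n) similarity matrix
    n = len(a)
    idx = np.arange(n)

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # stabilise the softmax
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()    # diagonal = positive pairs

    # Average the two retrieval directions (a->b and b->a).
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def trimodal_loss(audio, text, image):
    """Sum the pairwise contrastive losses over the three modality pairs."""
    audio, text, image = map(l2_normalize, (audio, text, image))
    return (info_nce(audio, text)
            + info_nce(audio, image)
            + info_nce(text, image))
```

Minimising this objective pulls the three embeddings of the same geotagged sample together while pushing apart embeddings from different samples, which is what makes zero-shot querying of the map by text or audio possible.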


Citation

@inproceedings{Khanal_2023_BMVC,
author    = {Subash Khanal and Srikumar Sastry and Aayush Dhakal and Nathan Jacobs},
title     = {Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping},
booktitle = {34th British Machine Vision Conference 2023, {BMVC} 2023, Aberdeen, UK, November 20-24, 2023},
publisher = {BMVA},
year      = {2023},
url       = {https://papers.bmvc2023.org/0813.pdf}
}


Copyright © 2023 The British Machine Vision Association and Society for Pattern Recognition
The British Machine Vision Conference is organised by The British Machine Vision Association and Society for Pattern Recognition. The Association is a Company limited by guarantee, No.2543446, and a non-profit-making body, registered in England and Wales as Charity No.1002307 (Registered Office: Dept. of Computer Science, Durham University, South Road, Durham, DH1 3LE, UK).
