
Publications


2021
Learning Contextual Tag Embeddings for Cross-Modal Alignment of Audio and Tags

X. Favory, K. Drossos, T. Virtanen, and X. Serra, "Learning Contextual Tag Embeddings for Cross-Modal Alignment of Audio and Tags," in proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Jun. 6-11, Toronto, Canada, 2021

Self-supervised audio representation learning offers an attractive alternative for obtaining generic audio embeddings that can be employed in various downstream tasks. Published approaches that consider both audio and the words/tags associated with audio do not employ text processing models capable of generalizing to tags unknown during training. In this work we propose a method for learning audio representations using an audio autoencoder (AAE), a general word embeddings model (WEM), and a multi-head self-attention (MHA) mechanism. MHA attends to the output of the WEM, providing a contextualized representation of the tags associated with the audio, and we align the output of MHA with the output of the encoder of AAE using a contrastive loss. We jointly optimize AAE and MHA, and we evaluate the audio representations (i.e. the output of the encoder of AAE) by utilizing them in three different downstream tasks, namely sound, music genre, and music instrument classification. Our results show that employing self-attention with multiple heads in the tag-based network can induce better learned audio representations.
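The contrastive alignment between the audio encoder's output and the attended tag representation can be sketched roughly as follows. This is a minimal numpy illustration of an InfoNCE-style batch loss, not the paper's exact formulation; the function name and temperature value are my own assumptions:

```python
import numpy as np

def contrastive_alignment_loss(audio_emb, tag_emb, temperature=0.1):
    """InfoNCE-style loss aligning audio embeddings with tag embeddings.

    Matching pairs (row i of both matrices) are pulled together;
    every other pairing in the batch serves as a negative.
    """
    # L2-normalize rows so dot products become cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = tag_emb / np.linalg.norm(tag_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as positives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pushes each audio embedding toward the contextualized embedding of its own tags and away from the tag embeddings of other items in the batch.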

WaveTransformer: An Architecture for Audio Captioning Based on Learning Temporal and Time-Frequency Information

A. Tran, K. Drossos, and T. Virtanen, "WaveTransformer: An Architecture for Audio Captioning Based on Learning Temporal and Time-Frequency Information," in proceedings of the 29th European Signal Processing Conference (EUSIPCO), Aug. 23-27, Dublin, Ireland, 2021

Automated audio captioning (AAC) is a novel task in which a method takes an audio sample as input and outputs a textual description (i.e. a caption) of its contents. Most AAC methods are adapted from the image captioning or machine translation fields. In this work, we present a novel AAC method explicitly focused on exploiting the temporal and time-frequency patterns in audio. We employ three learnable processes for audio encoding: two for extracting the local and temporal information, and one to merge the outputs of the previous two processes. To generate the caption, we employ the widely used Transformer decoder. We assess our method using the freely available splits of the Clotho dataset. Our results raise the previously reported highest SPIDEr score from 16.2 to 17.3.
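The three-process encoder described above can be caricatured with a toy numpy sketch: one branch summarizing temporal structure, one summarizing local time-frequency structure, and a merge step combining them. The pooling choices, shapes, and the random projection below are purely illustrative assumptions, not the paper's actual learnable layers:

```python
import numpy as np

def temporal_branch(x):
    # Collapse the frequency axis to keep a per-frame temporal summary
    return x.mean(axis=1)  # shape: (time,)

def time_frequency_branch(x):
    # Coarse 2x2 pooling to summarize local time-frequency patterns
    t, f = x.shape
    pooled = x.reshape(t // 2, 2, f // 2, 2).mean(axis=(1, 3))
    return pooled.reshape(-1)

def merge(h_temporal, h_tf, w):
    # Merge the two branch outputs; `w` stands in for a learnable layer
    return np.concatenate([h_temporal, h_tf]) @ w
```

In the actual WaveTransformer, the branches are learnable networks and the merged representation is fed to a Transformer decoder that emits the caption token by token.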

