
Publications


2019
Examining the Mapping Functions of Denoising Autoencoders in Music Source Separation [Journal]

Stylianos Ioannis Mimilakis, Konstantinos Drossos, Estefanía Cano, and Gerald Schuller, “Examining the Mapping Functions of Denoising Autoencoders in Music Source Separation,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 28, pp. 262–278, 2019.

The goal of this work is to investigate what singing voice separation approaches based on neural networks learn from the data. We examine the mapping functions of neural networks based on the denoising autoencoder (DAE) model that are conditioned on the mixture magnitude spectra. To approximate the mapping functions, we propose an algorithm inspired by knowledge distillation, denoted the neural couplings algorithm (NCA). The NCA yields a matrix that expresses the mapping of the mixture to the target source magnitude information. Using the NCA, we examine the mapping functions of three fundamental DAE-based models in music source separation: one with a single-layer encoder and decoder, one with a multi-layer encoder and a single-layer decoder, and one using skip-filtering connections (SF) with single-layer encoding and decoding. We first train these models with realistic data to estimate the singing voice magnitude spectra from the corresponding mixture. We then use the optimized models and test spectral data as input to the NCA. Our experimental findings show that approaches based on the DAE model learn scalar filtering operators, exhibiting a predominant diagonal structure in their corresponding mapping functions, which limits the exploitation of the inter-frequency structure of music data. In contrast, skip-filtering connections are shown to assist the DAE model in learning filtering operators that exploit richer inter-frequency structures.
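As an illustration of the kind of analysis described above, here is a minimal Python (PyTorch) sketch that builds a single-layer DAE on magnitude spectra and inspects how diagonal its input-to-output mapping is. It uses the model Jacobian as a simple stand-in for a mapping matrix, not the paper's neural couplings algorithm, and the frame size `F_BINS` and the untrained weights are assumptions of this sketch.

```python
# Illustrative probe of a DAE's effective mapping via its Jacobian.
# NOT the paper's NCA: only a simple proxy for checking how much of the
# mapping energy sits on the diagonal (scalar filtering) vs. off-diagonal
# (inter-frequency structure).
import torch

F_BINS = 1025  # assumed number of frequency bins per spectral frame


class SingleLayerDAE(torch.nn.Module):
    def __init__(self, f_bins: int = F_BINS):
        super().__init__()
        self.encoder = torch.nn.Linear(f_bins, f_bins)
        self.decoder = torch.nn.Linear(f_bins, f_bins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: mixture magnitude spectrum of one frame, shape (f_bins,)
        return torch.relu(self.decoder(torch.relu(self.encoder(x))))


model = SingleLayerDAE()                 # in practice: a trained model
mixture_frame = torch.rand(F_BINS)       # placeholder test spectrum

# Jacobian of the estimated source w.r.t. the mixture: an (F, F) matrix.
jacobian = torch.autograd.functional.jacobian(model, mixture_frame)
diag_share = (jacobian.diagonal().abs().sum() / jacobian.abs().sum()).item()
print(f"Diagonal share of mapping energy: {diag_share:.3f}")
```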

Crowdsourcing a Dataset of Audio Captions [Conference]

Samuel Lipping, Konstantinos Drossos, and Tuomas Virtanen, “Crowdsourcing a Dataset of Audio Captions,” in Proceedings of the International Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), Oct. 26–27, New York, NY, U.S.A., 2019.

Audio captioning is a novel field of multi-modal translation; it is the task of creating a textual description of the content of an audio signal (e.g. "people talking in a big room"). The creation of a dataset for this task requires a considerable amount of work, rendering crowdsourcing a very attractive option. In this paper we present a three-step framework for crowdsourcing an audio captioning dataset, based on concepts and practices followed for the creation of widely used image captioning and machine translation datasets. In the first step, initial captions are gathered. In the second step, a grammatically corrected and/or rephrased version of each initial caption is obtained. Finally, the initial and edited captions are rated, and the top-rated ones are kept for the produced dataset. We objectively evaluate the impact of our framework during the process of creating an audio captioning dataset, in terms of the diversity and the amount of typographical errors in the obtained captions. The obtained results show that the resulting dataset has fewer typographical errors than the initial captions, and that, on average, each sound in the produced dataset has captions with a Jaccard similarity of 0.24, roughly equivalent to two ten-word captions having four words with the same root in common, indicating that the captions are dissimilar while they still contain some of the same information.
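To make the reported similarity figure concrete, here is a minimal Python sketch of the Jaccard similarity between the word sets of two captions. The two example captions are hypothetical, and plain lower-cased tokens are used instead of the word roots used in the paper.

```python
# Minimal sketch: Jaccard similarity between two captions' word sets.
def jaccard(caption_a: str, caption_b: str) -> float:
    # Assumption of this sketch: plain tokens instead of word roots.
    a, b = set(caption_a.lower().split()), set(caption_b.lower().split())
    return len(a & b) / len(a | b)


cap_1 = "people are talking loudly in a very big empty room"
cap_2 = "several people talking in a large hall with strong echo"
# Two ten-word captions sharing four words: 4 / 16 = 0.25.
print(round(jaccard(cap_1, cap_2), 2))
```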

Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling [Conference]

Konstantinos Drossos, Shayan Gharib, Paul Magron, and Tuomas Virtanen, “Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling,” in Proceedings of the International Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), Oct. 26–27, New York, NY, U.S.A., 2019.

A sound event detection (SED) method typically takes as input a sequence of audio frames and predicts the activities of sound events in each frame. In real-life recordings, sound events exhibit some temporal structure: for instance, a "car horn" will likely be followed by a "car passing by". While this temporal structure is widely exploited in sequence prediction tasks (e.g., in machine translation), where language models (LMs) are used, it is not satisfactorily modeled in SED. In this work we propose a method which allows a recurrent neural network (RNN) to learn an LM for the SED task. The method conditions the input of the RNN on the activities of classes at the previous time step. We evaluate our method using the F1 score and error rate (ER) over three different, publicly available datasets: the TUT-SED Synthetic 2016 and the TUT Sound Events 2016 and 2017 datasets. The obtained results show an increase of 6% and 3% in F1 score (higher is better) and a decrease of 3% and 2% in ER (lower is better) for the TUT Sound Events 2016 and 2017 datasets, respectively, when using our method. Conversely, with our method there is a decrease of 10% in F1 score and an increase of 11% in ER for the TUT-SED Synthetic 2016 dataset.
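The conditioning scheme described above can be sketched in Python (PyTorch) as follows: the RNN input at each frame is concatenated with the class activities of the previous frame, taken either from the ground truth (teacher forcing) or from the model's own binarized predictions, as in scheduled sampling. The layer sizes, feature dimensionality, and threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of an RNN for SED conditioned on previous class activities.
import torch


class ConditionedSED(torch.nn.Module):
    def __init__(self, n_features: int = 40, n_classes: int = 6, hidden: int = 128):
        super().__init__()
        self.rnn = torch.nn.GRUCell(n_features + n_classes, hidden)
        self.classifier = torch.nn.Linear(hidden, n_classes)
        self.n_classes, self.hidden = n_classes, hidden

    def forward(self, frames, targets=None, tf_prob=1.0):
        # frames: (T, 1, n_features); targets: (T, 1, n_classes) ground truth.
        h = frames.new_zeros(1, self.hidden)
        prev = frames.new_zeros(1, self.n_classes)
        outputs = []
        for t, x in enumerate(frames):
            # Condition the current input on the previous frame's activities.
            h = self.rnn(torch.cat([x, prev], dim=-1), h)
            y = torch.sigmoid(self.classifier(h))
            outputs.append(y)
            # Teacher forcing with probability tf_prob, otherwise feed back
            # the model's own thresholded prediction (scheduled sampling).
            use_truth = targets is not None and torch.rand(()) < tf_prob
            prev = targets[t] if use_truth else (y > 0.5).float()
        return torch.stack(outputs)
```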

Unsupervised Adversarial Domain Adaptation Based On The Wasserstein Distance For Acoustic Scene Classification [Conference]

Konstantinos Drossos, Paul Magron, and Tuomas Virtanen, “Unsupervised Adversarial Domain Adaptation Based On The Wasserstein Distance For Acoustic Scene Classification,” accepted for publication at the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 20–23, New Paltz, NY, U.S.A., 2019.

A challenging problem in the field of deep learning-based machine listening is the degradation of performance when using data from unseen conditions. In this paper we focus on the acoustic scene classification (ASC) task and propose an adversarial deep learning method that allows adapting an acoustic scene classification system to a new acoustic channel, resulting from data captured with a different recording device. We build upon the theoretical model of the HΔH-distance and a previous adversarial discriminative deep learning method for unsupervised ASC domain adaptation, and we present an adversarial training method based on the Wasserstein distance. Using the TUT Acoustic Scenes dataset, we improve the state-of-the-art mean accuracy on the data from the unseen conditions from 32% to 45%.
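For illustration, the following Python (PyTorch) sketch shows a Wasserstein-style domain-critic loss with a gradient penalty, the kind of objective an adversarial adaptation scheme of this family trains its critic with. The function name, feature shapes, and penalty weight are assumptions of this sketch, not the paper's exact formulation.

```python
# Hedged sketch of a Wasserstein domain-critic loss for unsupervised adaptation.
# Assumes equal-sized batches of source and target feature vectors, shape (B, D).
import torch


def critic_wasserstein_loss(critic, src_feats, trg_feats, gp_weight=10.0):
    # Approximate Wasserstein distance between the two feature distributions.
    w_dist = critic(src_feats).mean() - critic(trg_feats).mean()

    # Gradient penalty on interpolated features keeps the critic ~1-Lipschitz.
    alpha = torch.rand(src_feats.size(0), 1)
    interp = (alpha * src_feats + (1 - alpha) * trg_feats).requires_grad_(True)
    grads, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    gp = ((grads.norm(2, dim=1) - 1) ** 2).mean()

    # The critic maximizes w_dist, i.e. minimizes this loss; the feature
    # extractor is then updated to shrink the same distance.
    return -w_dist + gp_weight * gp
```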

