Publications


2020
Clotho: An Audio Captioning Dataset [Conference]

Konstantinos Drossos, Samuel Lipping, Tuomas Virtanen, “Clotho: An Audio Captioning Dataset,” in proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 04–08, Barcelona, Spain, 2020

Audio captioning is the novel task of general audio content description using free text. It is an intermodal translation task (not speech-to-text), where a system accepts an audio signal as input and outputs the textual description (i.e. the caption) of that signal. In this paper we present Clotho, a dataset for audio captioning consisting of 4981 audio samples of 15 to 30 seconds duration and 24 905 captions of eight to 20 words length, and a baseline method to provide initial results. Clotho is built with a focus on audio content and caption diversity, and the data splits do not hamper the training or evaluation of methods. All sounds are from the Freesound platform, and captions are crowdsourced using Amazon Mechanical Turk and annotators from English-speaking countries. Unique words, named entities, and speech transcription are removed with post-processing. Clotho is freely available online (https://zenodo.org/record/3490684).

COALA: Co-Aligned Autoencoders for Learning Semantically Enriched Audio Representations [Conference]

Xavier Favory, Konstantinos Drossos, Tuomas Virtanen, and Xavier Serra, "COALA: Co-Aligned Autoencoders for Learning Semantically Enriched Audio Representations," in International Conference on Machine Learning (ICML), Workshop on Self-supervised learning in Audio and Speech, Jul. 17, virtually held, 2020

Audio representation learning based on deep neural networks (DNNs) has emerged as an alternative to hand-crafted features. To achieve high performance, DNNs often need a large amount of annotated data, which can be difficult and costly to obtain. In this paper, we propose a method for learning audio representations by aligning the learned latent representations of audio and associated tags. Alignment is done by maximizing the agreement between the latent representations of audio and tags, using a contrastive loss. The result is an audio embedding model that reflects acoustic and semantic characteristics of sounds. We evaluate the quality of our embedding model by measuring its performance as a feature extractor on three different tasks (namely, sound event recognition, and music genre and musical instrument classification), and we investigate what type of characteristics the model captures. Our results are promising, sometimes on par with the state of the art in the considered tasks, and the embeddings produced with our method correlate well with some acoustic descriptors.
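
For illustration, the sketch below shows one way to align two sets of latent vectors with a contrastive loss, in the spirit described above; the encoder outputs, batch size, and temperature are placeholders and not the exact setup of the paper.

    import torch
    import torch.nn.functional as F

    def contrastive_alignment_loss(z_audio, z_tags, temperature=0.1):
        # Illustrative NT-Xent-style loss: matching audio/tag embeddings are pulled
        # together, non-matching pairs within the batch are pushed apart.
        z_audio = F.normalize(z_audio, dim=-1)          # cosine-similarity space
        z_tags = F.normalize(z_tags, dim=-1)
        logits = z_audio @ z_tags.t() / temperature     # (batch, batch) similarities
        targets = torch.arange(z_audio.size(0))         # matching pairs on the diagonal
        # Symmetric cross-entropy: audio-to-tag and tag-to-audio directions.
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    # Toy usage with random stand-ins for the outputs of the audio and tag encoders.
    loss = contrastive_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))
    print(loss.item())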

Depthwise Separable Convolutions Versus Recurrent Neural Networks for Monaural Singing Voice Separation [Conference]

Pyry Pyykkönen, Stylianos I. Mimilakis, Konstantinos Drossos, and Tuomas Virtanen, "Depthwise Separable Convolutions Versus Recurrent Neural Networks for Monaural Singing Voice Separation," in proceedings of the 22nd IEEE International Workshop on Multimedia Signal Processing (MMSP), Sep. 21-24, Tampere, Finland, 2020

Recent approaches for music source separation are almost exclusively based on deep neural networks, mostly employing recurrent neural networks (RNNs). Although RNNs are in many cases superior to other types of deep neural networks for sequence processing, they are known to have specific difficulties in training and parallelization, especially for the typically long sequences encountered in music source separation. In this paper we present a use case of replacing RNNs with depthwise separable (DWS) convolutions, which are a lightweight and faster variant of typical convolutions. We focus on singing voice separation, starting from an RNN architecture, and we replace the RNNs with DWS convolutions (DWS-CNNs). We conduct an ablation study and examine the effect of the number of channels and layers of the DWS-CNNs on source separation performance, using the standard metrics of signal-to-artifacts, signal-to-interference, and signal-to-distortion ratio. Our results show that replacing the RNNs with DWS-CNNs yields improvements of 1.20, 0.06, and 0.37 dB in these metrics, respectively, while using only 20.57% of the parameters of the RNN architecture.
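
As a rough sketch of the building block discussed above (not the paper's full separation architecture), a depthwise separable convolution can be written as a per-channel convolution followed by a 1x1 pointwise convolution; the channel counts and kernel size below are arbitrary.

    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        # Depthwise separable 2-D convolution: a per-channel (depthwise) convolution
        # followed by a 1x1 (pointwise) convolution that mixes channels.
        def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
            super().__init__()
            self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                       padding=padding, groups=in_channels)
            self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    # Parameter count versus a standard convolution with the same input/output shape.
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(nn.Conv2d(64, 128, 3, padding=1)), count(DepthwiseSeparableConv(64, 128)))

The comparison in the last line shows the kind of parameter reduction that motivates the replacement.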

Memory Requirement Reduction of Deep Neural Networks Using Low-bit Quantization of Parameters [Conference]

Niccolò Nicodemo, Gaurav Naithani, Konstantinos Drossos, Tuomas Virtanen, Roberto Saletti, "Memory Requirement Reduction of Deep Neural Networks Using Low-bit Quantization of Parameters," in proceedings of the 28th European Signal Processing Conference (EUSIPCO), Aug. 24 - 28, Amsterdam, Netherlands, 2020

Effective employment of deep neural networks (DNNs) in mobile devices and embedded systems is hampered by requirements for memory and computational power. This paper presents a non-uniform quantization approach which allows for dynamic quantization of DNN parameters across different layers and within the same layer. A virtual bit shift (VBS) scheme is also proposed to improve the accuracy of the quantization. Our method reduces the memory requirements while preserving the performance of the network. The method is validated in a speech enhancement application, where a fully connected DNN is used to predict the clean speech spectrum from the input noisy speech spectrum. The DNN is optimized, and its memory footprint and performance are evaluated using the short-time objective intelligibility (STOI) metric. Applying the low-bit quantization allows a 50% reduction of the DNN memory footprint while the STOI performance drops by only 2.7%.
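
The paper's exact layer-wise scheme and the virtual bit shift are not reproduced here; the sketch below only illustrates the general idea of non-uniform low-bit quantization of a layer's parameters, using quantile-based codewords as a placeholder codebook choice.

    import numpy as np

    def quantize_nonuniform(weights, bits=4):
        # Build a non-uniform codebook with one codeword per quantile bin centre,
        # so codewords are denser where the weight distribution is denser.
        levels = 2 ** bits
        codebook = np.quantile(weights, (np.arange(levels) + 0.5) / levels)
        # Map every weight to its nearest codeword; store only the small integer codes.
        codes = np.argmin(np.abs(weights[..., None] - codebook), axis=-1)
        return codebook[codes], codes.astype(np.uint8), codebook

    # Toy layer: 4-bit codes instead of 32-bit floats, roughly an 8x smaller footprint.
    w = np.random.randn(256, 128).astype(np.float32)
    w_hat, codes, codebook = quantize_nonuniform(w, bits=4)
    print("mean absolute quantization error:", np.abs(w - w_hat).mean())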

Multi-task Regularization Based on Infrequent Classes for Audio Captioning [Conference]

Emre Çakır, Konstantinos Drossos, Tuomas Virtanen, "Multi-task Regularization Based on Infrequent Classes for Audio Captioning," in proceedings of the International Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), Nov. 2-3, Tokyo, Japan (fully virtual), 2020

Audio captioning is a multi-modal task, focusing on using natural language to describe the contents of general audio. Most audio captioning methods are based on deep neural networks, employing an encoder-decoder scheme and a dataset with audio clips and corresponding natural language descriptions (i.e. captions). A significant challenge for audio captioning is the distribution of words in the captions: some words are very frequent but acoustically non-informative, i.e. the function words (e.g. "a", "the"), and other words are infrequent but informative, i.e. the content words (e.g. adjectives, nouns). In this paper we propose two methods to mitigate this class imbalance problem. First, in an autoencoder setting for audio captioning, we weight each word's contribution to the training loss inversely proportionally to its number of occurrences in the whole dataset. Second, in addition to the multi-class, word-level audio captioning task, we define a multi-label side task based on clip-level content word detection, training a separate decoder. We use the loss from the second task to regularize the jointly trained encoder for the audio captioning task. We evaluate our method using Clotho, a recently published, wide-scale audio captioning dataset, and our results show a 37% relative improvement in the SPIDEr metric over the baseline method.
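
A minimal sketch of the first method (inverse-frequency loss weighting) is given below; the toy vocabulary, captions, and normalization are illustrative assumptions rather than the Clotho data or the paper's exact implementation, and the second method (the multi-label side task) is not shown.

    import torch
    import torch.nn.functional as F
    from collections import Counter

    def inverse_frequency_weights(captions, vocab):
        # Weight each vocabulary word inversely proportionally to its number of
        # occurrences, so frequent function words contribute less to the loss.
        counts = Counter(word for caption in captions for word in caption)
        freqs = torch.tensor([counts.get(word, 1) for word in vocab], dtype=torch.float)
        weights = 1.0 / freqs
        return weights / weights.mean()                 # keep the average weight at 1

    vocab = ["a", "the", "dog", "barks", "rain", "falls"]        # toy vocabulary
    captions = [["a", "dog", "barks"], ["the", "rain", "falls"], ["a", "dog", "barks"]]
    weights = inverse_frequency_weights(captions, vocab)

    # The weights plug directly into the word-level cross-entropy of a caption decoder.
    logits = torch.randn(5, len(vocab))                 # 5 decoding steps
    targets = torch.tensor([0, 2, 3, 1, 4])
    print(F.cross_entropy(logits, targets, weight=weights).item())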

Multichannel Singing Voice Separation by Deep Neural Network Informed DOA Constrained CNMF [Conference]

Antonio J. Muñoz-Montoro, Julio J. Carabias-Orti, Archontis Politis, and Konstantinos Drossos, "Multichannel Singing Voice Separation by Deep Neural Network Informed DOA Constrained CNMF," in proceedings of the 22nd IEEE International Workshop on Multimedia Signal Processing (MMSP), Sep. 21-24, Tampere, Finland, 2020

This work addresses the problem of multichannel source separation by combining two powerful approaches: multichannel spectral factorization and recent monophonic deep-learning (DL) based spectrum inference. Individual source spectra at different channels are estimated with a Masker-Denoiser Twin Network (MaD TwinNet), which is able to model long-term temporal patterns of a musical piece. The monophonic source spectrograms are used within a spatial covariance mixing model based on Complex Non-Negative Matrix Factorization (CNMF) that predicts the spatial characteristics of each source. The proposed framework is evaluated on the task of singing voice separation with a large multichannel dataset. Experimental results show that our joint DL+CNMF method outperforms both the individual monophonic DL-based separation and the multichannel CNMF baseline methods.

Sound Event Detection via Dilated Convolutional Recurrent Neural Networks [Conference]

Yanxiong Li, Mingle Liu, Konstantinos Drossos, Tuomas Virtanen, “Sound Event Detection via Dilated Convolutional Recurrent Neural Networks,” in proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 04–08, Barcelona, Spain, 2020

Convolutional recurrent neural networks (CRNNs) have achieved state-of-the-art performance for sound event detection (SED). In this paper, we propose to use a dilated CRNN, namely a CRNN with a dilated convolutional kernel, as the classifier for the task of SED. We investigate the effectiveness of dilation operations, which provide a CRNN with expanded receptive fields that capture long temporal context without increasing the number of the CRNN's parameters. Compared to the classifier of the baseline CRNN, the classifier of the dilated CRNN obtains a maximum increase of 1.9%, 6.3% and 2.5% in F1 score and a maximum decrease of 1.7%, 4.1% and 3.9% in error rate (ER) on the publicly available audio corpora of TUT-SED Synthetic 2016, TUT Sound Events 2016 and TUT Sound Events 2017, respectively.
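
As a small illustration of the dilation idea (the full dilated CRNN classifier is not reproduced), the snippet below compares a standard 3x3 convolution with a dilated one: the dilated kernel covers a 5x5 neighbourhood and therefore a longer temporal context, at no extra parameter cost. The tensor shape is an assumed (frames, mel bands) layout.

    import torch
    import torch.nn as nn

    standard = nn.Conv2d(1, 16, kernel_size=3, padding=1)                # 3x3 receptive field
    dilated = nn.Conv2d(1, 16, kernel_size=3, padding=2, dilation=2)     # effective 5x5 field

    x = torch.randn(1, 1, 128, 40)              # (batch, channels, time frames, mel bands)
    print(standard(x).shape, dilated(x).shape)  # identical output shapes
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(standard) == count(dilated))    # True: dilation adds no parameters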

Sound Event Detection with Depthwise Separable and Dilated Convolutions [Conference]

Konstantinos Drossos, Stylianos I. Mimilakis, Shayan Gharib, Yanxiong Li, Tuomas Virtanen, “Sound Event Detection with Depthwise Separable and Dilated Convolutions,” in proceedings of the IEEE World Congress on Computational Intelligence/International Joint Conference on Neural Networks (WCCI/IJCNN), Jul. 19–24, Glasgow, Scotland, 2020

State-of-the-art sound event detection (SED) methods usually employ a series of convolutional neural networks (CNNs) to extract useful features from the input audio signal, followed by recurrent neural networks (RNNs) to model longer temporal context in the extracted features. The number of channels of the CNNs and the size of the weight matrices of the RNNs have a direct effect on the total number of parameters of the SED method, which typically amounts to a couple of millions. Additionally, the usually long sequences that are used as input to an SED method, together with the employment of an RNN, introduce complications such as increased training time, difficulties in gradient flow, and impeded parallelization of the SED method. To tackle these problems, we propose replacing the CNNs with depthwise separable convolutions and the RNNs with dilated convolutions. We compare the proposed method to a baseline convolutional neural network on an SED task, and achieve a reduction of the number of parameters by 85% and of the average training time per epoch by 78%, together with an increase of the average frame-wise F1 score by 4.6% and a reduction of the average error rate by 3.8%.

Temporal Sub-sampling of Audio Feature Sequences for Automated Audio Captioning [Conference]

Khoa Nguyen, Konstantinos Drossos, Tuomas Virtanen, "Temporal Sub-sampling of Audio Feature Sequences for Automated Audio Captioning," in proceedings of the International Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), Nov. 2-3, Tokyo, Japan (fully virtual), 2020

Audio captioning is the task of automatically creating a textual description for the contents of a general audio signal. Typical audio captioning methods rely on deep neural networks (DNNs), where the target of the DNN is to map the input audio sequence to an output sequence of words, i.e. the caption. However, the length of the textual description is considerably shorter than the length of the audio signal, for example 10 words versus some thousands of audio feature vectors. This clearly indicates that an output word corresponds to multiple input feature vectors. In this work we present an approach that explicitly takes advantage of this difference in sequence lengths by applying temporal sub-sampling to the input audio sequence. We employ a sequence-to-sequence method whose encoder outputs a fixed-length vector, and we apply temporal sub-sampling between the RNNs of the encoder. We evaluate the benefit of our approach on the freely available Clotho dataset, and we evaluate the impact of different temporal sub-sampling factors. Our results show an improvement in all considered metrics.
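
A minimal sketch of the sub-sampling idea is shown below, with a two-layer GRU encoder that keeps every second time step between its layers; the dimensions, sub-sampling factor, and GRU choice are placeholders rather than the configuration used in the paper.

    import torch
    import torch.nn as nn

    class SubsamplingEncoder(nn.Module):
        # Seq2seq-style encoder that shortens the sequence between its two RNN layers
        # and returns a fixed-length vector for the decoder.
        def __init__(self, feat_dim=64, hidden=128, factor=2):
            super().__init__()
            self.rnn1 = nn.GRU(feat_dim, hidden, batch_first=True)
            self.rnn2 = nn.GRU(hidden, hidden, batch_first=True)
            self.factor = factor

        def forward(self, x):                   # x: (batch, time, features)
            h, _ = self.rnn1(x)
            h = h[:, ::self.factor, :]          # temporal sub-sampling between the RNNs
            h, _ = self.rnn2(h)
            return h[:, -1, :]                  # fixed-length encoder output

    encoder = SubsamplingEncoder()
    audio_features = torch.randn(4, 1000, 64)   # e.g. ~1000 feature frames per clip
    print(encoder(audio_features).shape)        # torch.Size([4, 128])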

Unsupervised Interpretable Representation Learning for Singing Voice Separation [Conference]

Stylianos I. Mimilakis, Konstantinos Drossos, Gerald Schuller, "Unsupervised Interpretable Representation Learning for Singing Voice Separation," in proceedings of the 28th European Signal Processing Conference (EUSIPCO), Jan. 18 - 22 (2021), Amsterdam, Netherlands, 2020

In this work, we present a method for learning interpretable music signal representations directly from waveform signals. Our method can be trained using unsupervised objectives and relies on a denoising autoencoder that uses a simple sinusoidal model as the decoding function to reconstruct the singing voice. To demonstrate the benefits of our method, we apply the obtained representations to the task of informed singing voice separation via binary masking, and we measure the obtained separation quality by means of the scale-invariant signal-to-distortion ratio. Our findings suggest that our method is capable of learning meaningful representations for singing voice separation, while preserving conveniences of the short-time Fourier transform, such as non-negativity, smoothness, and reconstruction subject to time-frequency masking, that are desired in audio and music source separation.

