
Publications

Keyword: speech denoising (2)

2025
Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance [Conference]

Diep Luong, Mikko Heikkinen, Konstantinos Drossos, and Tuomas Virtanen, “Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance,” 158th Audio Engineering Society Convention, May 22–24, Warsaw, Poland, 2025

Speech denoising is a prominent and widely utilized task, appearing in many common use cases. Although very powerful machine learning methods have been published, most of them are too complex for deployment in everyday and/or low-resource computational environments, such as hand-held devices, smart glasses, hearing aids, and automotive platforms. Knowledge distillation (KD) is a prominent way of alleviating this complexity mismatch by transferring the learned knowledge from a pre-trained complex model, the teacher, to a less complex one, the student. KD is implemented using minimization criteria (e.g., loss functions) between the learned information of the teacher and the corresponding information of the student. Existing KD methods for speech denoising hamper the distillation by bounding the learning of the student to the distribution learned by the teacher. Our work focuses on a method that alleviates this issue by exploiting properties of the cosine similarity used as the KD loss function. We use a publicly available dataset and a typical speech-denoising architecture (a UNet) tuned for low-resource environments, and conduct repeated experiments with different architectural variations between the teacher and the student, reporting the mean and standard deviation of the metrics for our method and for a state-of-the-art method used as a baseline. Our results show that our method yields smaller speech denoising models, deployable on small devices and embedded systems, that perform better than both typically trained models and models trained with other KD methods.
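The key property the abstract exploits is that cosine similarity is scale-invariant, so a loss built on it only asks the student's latent representation to match the *direction* of the teacher's, not its magnitude. A minimal NumPy sketch of such a loss (the function name and shapes are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def cosine_distance_kd_loss(teacher_latent, student_latent, eps=1e-8):
    """Mean cosine distance (1 - cosine similarity) between teacher and
    student latent vectors, computed over the last axis.

    Because cosine similarity ignores vector norms, the student is not
    forced to reproduce the scale of the teacher's representation --
    the property that loosens the coupling to the teacher's learned
    distribution."""
    t = np.asarray(teacher_latent, dtype=np.float64)
    s = np.asarray(student_latent, dtype=np.float64)
    cos_sim = np.sum(t * s, axis=-1) / (
        np.linalg.norm(t, axis=-1) * np.linalg.norm(s, axis=-1) + eps
    )
    return float(np.mean(1.0 - cos_sim))
```

In a training loop this scalar would be differentiated through the student (e.g., with an autodiff framework) and combined with the usual denoising objective; the sketch above only shows the distance itself.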

Lightweight DNN for Full-Band Speech Denoising on Mobile Devices: Exploiting Long and Short Temporal Patterns [Conference]

K. Drossos, M. Heikkinen, and P. Tsiaflakis, "Lightweight DNN for Full-Band Speech Denoising on Mobile Devices: Exploiting Long and Short Temporal Patterns," in Proceedings of the 27th IEEE International Workshop on Multimedia Signal Processing (MMSP 2025), Tsinghua, China, 2025

Speech denoising (SD) is an important task in many, if not all, modern signal processing chains used in devices and in everyday-life applications. While many powerful deep neural network (DNN)-based methods for SD have been published, few are optimized for resource-constrained platforms such as mobile devices. Additionally, most DNN-based methods for SD do not focus on full-band (FB) signals, i.e., signals with a 48 kHz sampling rate, and/or on low-latency cases. In this paper we present a causal, low-latency, and lightweight DNN-based method for full-band SD that leverages both short and long temporal patterns. The method is based on a modified UNet architecture that employs look-back frames, temporal spanning of convolutional kernels, and recurrent neural networks to exploit short and long temporal patterns in the signal and the estimated denoising mask. The DNN operates on a causal frame-by-frame basis, takes the STFT magnitude as input, utilizes inverted bottlenecks inspired by MobileNet, employs causal instance normalization for channel-wise normalization, and achieves a real-time factor below 0.02 when deployed on a modern mobile phone. The proposed method is evaluated using established speech denoising metrics and publicly available datasets, demonstrating its effectiveness by achieving an (SI-)SDR value that outperforms existing FB and low-latency SD methods.
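The causal, frame-by-frame operation with look-back frames can be illustrated with a small NumPy sketch: the mask for frame t is computed only from frames t-L..t, never from the future. Here `mask_fn` is a hypothetical stand-in for the paper's DNN, and the buffer size `look_back` is an assumed parameter:

```python
import numpy as np
from collections import deque

def denoise_stream(mag_frames, mask_fn, look_back=4):
    """Causal frame-by-frame spectral masking.

    mag_frames: (n_frames, n_bins) STFT magnitude frames, processed in
    streaming order. mask_fn is a placeholder for the denoising DNN: it
    maps a (look_back + 1, n_bins) causal context (past frames padded
    with zeros at stream start) to a (n_bins,) mask in [0, 1], which is
    applied multiplicatively to the current frame."""
    n_bins = mag_frames.shape[1]
    # Zero-padded history so the very first frames still see a full context.
    buf = deque([np.zeros(n_bins)] * look_back, maxlen=look_back + 1)
    out = []
    for frame in mag_frames:
        buf.append(frame)               # context is now frames t-L .. t
        mask = mask_fn(np.stack(buf))   # mask depends only on the past
        out.append(mask * frame)
    return np.stack(out)
```

Because each output frame depends only on the current and previous frames, changing a future frame cannot alter earlier outputs, which is what makes low-latency streaming deployment possible.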
