AUTHOR=Elminshawi Mohamed, Mack Wolfgang, Chetupalli Srikanth Raj, Chakrabarty Soumitro, Habets Emanuël A. P. TITLE=New insights on the role of auxiliary information in target speaker extraction JOURNAL=Frontiers in Signal Processing VOLUME=4 YEAR=2024 URL=https://www.frontiersin.org/journals/signal-processing/articles/10.3389/frsip.2024.1440401 DOI=10.3389/frsip.2024.1440401 ISSN=2673-8198 ABSTRACT=

Speaker extraction (SE) aims to isolate the speech of a target speaker from a mixture of interfering speakers with the help of auxiliary information. Several forms of auxiliary information have been employed in single-channel SE, such as an enrolled speech snippet of the target speaker or visual information corresponding to the spoken utterance. The effectiveness of the auxiliary information in SE is typically evaluated by comparing the extraction performance of SE with that of uninformed speaker separation (SS) methods. Following this evaluation procedure, many SE studies have reported performance improvements over SS and attributed them to the auxiliary information. However, recent advancements in deep neural network architectures, which have shown remarkable performance for SS, suggest an opportunity to revisit this conclusion. In this paper, we examine the role of auxiliary information in SE across multiple datasets and various input conditions. Specifically, we compare the performance of two SE systems (audio-based and video-based) with SS using a unified framework built on the commonly used dual-path recurrent neural network architecture. Experimental evaluation on various datasets demonstrates that the use of auxiliary information in the considered SE systems does not always lead to better extraction performance than the uninformed SS system. Furthermore, we offer new insights into how SE systems select the target speaker by analyzing their behavior when provided with different and distorted auxiliary information given the same mixture input.
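The abstract refers to a unified framework built on the dual-path recurrent neural network (DPRNN) architecture, used for both informed SE and uninformed SS. As a rough illustration of how auxiliary information is commonly injected into such a separator, the sketch below conditions a standard DPRNN-style masking network on a target-speaker embedding. This is not the authors' implementation: the class names, dimensions, and the fusion scheme (element-wise scaling of the encoded mixture by the embedding) are illustrative assumptions, and an uninformed SS counterpart would simply omit the embedding and estimate one mask per speaker.

```python
# Minimal, illustrative sketch (not the paper's code) of an auxiliary-conditioned
# DPRNN-style mask estimator. All names, sizes, and the fusion scheme are assumptions.
import torch
import torch.nn as nn


class DPRNNBlock(nn.Module):
    """One dual-path block: intra-chunk BLSTM followed by inter-chunk BLSTM."""

    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.intra_rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.intra_proj = nn.Linear(2 * hidden, feat_dim)
        self.inter_rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.inter_proj = nn.Linear(2 * hidden, feat_dim)

    def forward(self, x):  # x: [B, N, K, S] = (batch, features, chunk length, num chunks)
        b, n, k, s = x.shape
        # Intra-chunk path: RNN runs along the chunk-length axis K.
        intra = x.permute(0, 3, 2, 1).reshape(b * s, k, n)
        intra = self.intra_proj(self.intra_rnn(intra)[0])
        x = x + intra.reshape(b, s, k, n).permute(0, 3, 2, 1)
        # Inter-chunk path: RNN runs along the chunk axis S.
        inter = x.permute(0, 2, 3, 1).reshape(b * k, s, n)
        inter = self.inter_proj(self.inter_rnn(inter)[0])
        return x + inter.reshape(b, k, s, n).permute(0, 3, 1, 2)


class ConditionedDPRNN(nn.Module):
    """Maps an encoded mixture [B, N, T] to a mask [B, N, T], optionally
    conditioned on an auxiliary target-speaker embedding [B, N]."""

    def __init__(self, feat_dim: int = 64, chunk: int = 100, num_blocks: int = 4):
        super().__init__()
        self.chunk = chunk
        self.blocks = nn.ModuleList([DPRNNBlock(feat_dim) for _ in range(num_blocks)])
        self.mask = nn.Sequential(nn.PReLU(), nn.Conv1d(feat_dim, feat_dim, 1), nn.Sigmoid())

    def forward(self, mix_enc, spk_emb=None):
        if spk_emb is not None:
            # Assumed fusion: scale the mixture encoding by the speaker embedding.
            mix_enc = mix_enc * spk_emb.unsqueeze(-1)
        b, n, t = mix_enc.shape
        pad = (-t) % self.chunk
        # Split the time axis into non-overlapping chunks: [B, N, K, S].
        x = nn.functional.pad(mix_enc, (0, pad)).reshape(b, n, -1, self.chunk).transpose(2, 3)
        for blk in self.blocks:
            x = blk(x)
        # Merge chunks back to a full-length sequence and estimate the mask.
        x = x.transpose(2, 3).reshape(b, n, -1)[:, :, :t]
        return self.mask(x)


# Usage: the same separator run informed (SE) and without auxiliary input.
model = ConditionedDPRNN()
enc = torch.randn(2, 64, 500)    # encoded mixture
emb = torch.randn(2, 64)         # auxiliary target-speaker embedding
se_mask = model(enc, emb)        # informed extraction (SE)
plain_mask = model(enc)          # same network without auxiliary information
```

In this sketch the only difference between the informed and uninformed passes is the presence of the embedding, which mirrors the unified-framework idea in the abstract: architecture and capacity are held fixed so that any performance gap can be attributed to the auxiliary information rather than to the network itself.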