AUTHOR=Lorenz Corinna, Hao Xinyu, Tomka Tomas, Rüttimann Linus, Hahnloser Richard H.R. TITLE=Interactive extraction of diverse vocal units from a planar embedding without the need for prior sound segmentation JOURNAL=Frontiers in Bioinformatics VOLUME=2 YEAR=2023 URL=https://www.frontiersin.org/journals/bioinformatics/articles/10.3389/fbinf.2022.966066 DOI=10.3389/fbinf.2022.966066 ISSN=2673-7647 ABSTRACT=

Annotating and proofreading data sets of complex natural behaviors such as vocalizations are tedious tasks because instances of a given behavior need to be correctly segmented from background noise and must be classified with a minimal false positive error rate. Low-dimensional embeddings have proven very useful for this task because they can provide a visual overview of a data set in which distinct behaviors appear in different clusters. However, low-dimensional embeddings introduce errors because they fail to preserve distances, and they represent only objects of fixed dimensionality, which conflicts with vocalizations whose dimensions vary with their durations. To mitigate these issues, we introduce a semi-supervised, analytical method for simultaneous segmentation and clustering of vocalizations. We define a given vocalization type by specifying pairs of high-density regions in the embedding plane of sound spectrograms, one region associated with vocalization onsets and the other with offsets. We demonstrate our two-neighborhood (2N) extraction method on the task of clustering adult zebra finch vocalizations embedded with UMAP. We show that 2N extraction allows the identification of short and long vocal renditions from continuous data streams without initially committing to a particular segmentation of the data. Also, 2N extraction achieves a much lower false positive error rate than comparable approaches based on a single defining region. Along with our method, we present a graphical user interface (GUI) for visualizing and annotating data.
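The two-neighborhood idea described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the use of fixed-size spectrogram windows, circular user-drawn regions, the hop size, the duration bounds, and the onset-offset pairing rule are all assumptions made for the sake of a runnable example.

```python
# Rough sketch of a 2N-style extraction from a UMAP embedding (illustrative only).
# Assumptions (not from the paper): windows are synthetic random vectors standing in
# for spectrogram slices; regions are circles; durations and hop size are made up.
import numpy as np
import umap

rng = np.random.default_rng(0)

# Stand-in for spectrogram windows taken at every time step of a continuous recording.
n_windows, n_features, hop_s = 2000, 64, 0.005   # 5 ms hop (assumed)
windows = rng.normal(size=(n_windows, n_features))

# Embed all windows into the plane with UMAP.
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(windows)

def in_region(points, center, radius):
    """Boolean mask of points inside a circular region of the embedding plane."""
    return np.linalg.norm(points - np.asarray(center), axis=1) <= radius

# Hypothetical user-specified regions (in practice chosen interactively in the GUI):
onset_region = ((2.0, 1.0), 0.8)    # (center, radius) for vocalization onsets
offset_region = ((-1.5, 0.5), 0.8)  # (center, radius) for vocalization offsets

onset_idx = np.flatnonzero(in_region(embedding, *onset_region))
offset_idx = np.flatnonzero(in_region(embedding, *offset_region))

# Pair each onset window with the next later offset window within a plausible
# duration range, yielding candidate vocal units without any prior segmentation.
min_dur_s, max_dur_s = 0.02, 0.5     # assumed unit durations
units = []
for i in onset_idx:
    later = offset_idx[offset_idx > i]
    if later.size == 0:
        continue
    dur = (later[0] - i) * hop_s
    if min_dur_s <= dur <= max_dur_s:
        units.append((i * hop_s, later[0] * hop_s))  # (onset, offset) in seconds

print(f"extracted {len(units)} candidate vocal units")
```

In this sketch, requiring both an onset region and an offset region is what distinguishes the two-neighborhood rule from a single-region criterion: a candidate unit is kept only if its start and end windows each fall in their respective high-density regions, which is what the abstract credits for the lower false positive rate.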