AUTHOR=Diehl Peter Udo, Zilly Hannes, Sattler Felix, Singer Yosef, Kepp Kevin, Berry Mark, Hasemann Henning, Zippel Marlene, Kaya Müge, Meyer-Rachner Paul, Pudszuhn Annett, Hofmann Veit M., Vormann Matthias, Sprengel Elias TITLE=Deep learning-based denoising streamed from mobile phones improves speech-in-noise understanding for hearing aid users JOURNAL=Frontiers in Medical Engineering VOLUME=1 YEAR=2023 URL=https://www.frontiersin.org/journals/medical-engineering/articles/10.3389/fmede.2023.1281904 DOI=10.3389/fmede.2023.1281904 ISSN=2813-687X ABSTRACT=

The hearing loss of almost half a billion people is commonly treated with hearing aids. However, current hearing aids often do not work well in real-world noisy environments. We present a deep learning-based denoising system that runs in real time on an iPhone 7 and a Samsung Galaxy S10 (25 ms algorithmic latency). The denoised audio is streamed to the hearing aid, resulting in a total delay of around 65–75 ms, depending on the phone. In tests with hearing aid users with moderate to severe hearing loss, our denoising system improves audio across three tests: 1) a listening test with subjective audio ratings, 2) a listening test of objective speech intelligibility, and 3) live conversations in a noisy environment with subjective ratings. Subjective ratings increase by more than 40% for both the listening test and the live conversation, compared to a fitted hearing aid as a baseline. Speech reception thresholds, which measure speech understanding in noise, improve by 1.6 dB SRT. Ours is the first denoising system implemented on a mobile device and streamed directly to users’ hearing aids that uses only a single audio channel as input while improving user satisfaction on all tested aspects, including speech intelligibility. This includes an overall preference for the denoised, streamed signal over the hearing aid alone, meaning users accepted the higher latency in exchange for the significant improvement in speech understanding.
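To illustrate how a frame-based, single-channel denoiser yields an algorithmic latency equal to one frame length (25 ms in the abstract), the sketch below processes audio in overlapping windows with overlap-add reconstruction. The 16 kHz sample rate, 25 ms frame, 12.5 ms hop, Hann window, and the identity placeholder standing in for the on-device neural network are all illustrative assumptions; the abstract does not specify the actual model or buffering scheme.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed sample rate
FRAME = 400            # 25 ms frame at 16 kHz -> one frame of algorithmic latency
HOP = 200              # 12.5 ms hop (50% overlap), assumed

def denoise_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the on-device neural denoiser (assumption)."""
    return frame  # a real system would run a small DNN here

def stream_denoise(samples: np.ndarray) -> np.ndarray:
    """Windowed overlap-add processing: each output sample must wait until a
    full frame has been collected, so the algorithmic latency is FRAME samples."""
    window = np.hanning(FRAME)
    out = np.zeros(len(samples))
    norm = np.zeros(len(samples))
    for start in range(0, len(samples) - FRAME + 1, HOP):
        frame = samples[start:start + FRAME] * window
        out[start:start + FRAME] += denoise_frame(frame)
        norm[start:start + FRAME] += window
    norm[norm == 0] = 1.0          # avoid division by zero at uncovered edges
    return out / norm              # compensate the accumulated window gain

if __name__ == "__main__":
    noisy = np.random.randn(SAMPLE_RATE)        # 1 s of synthetic noise
    clean_estimate = stream_denoise(noisy)
    print(clean_estimate.shape)
```

With the identity placeholder the loop simply reconstructs its input, which makes the gain-compensation step easy to verify; substituting a trained network for denoise_frame would turn the same loop into a streaming denoiser whose delay budget matches the frame length, with any additional device and Bluetooth streaming delay accounting for the reported 65–75 ms total.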