Improving speech playback for hearing aid users

A tougher challenge is making speech more intelligible for users of hearing prosthetics. You can read more about hearing aids and the difficulties their users face in everyday life here. The goal of my research network (ENRICH) is to create technologies that facilitate communication among individuals, with special attention to differing abilities in speech and hearing.
In 2019 I started a study at Hörzentrum Oldenburg, Germany, with a selected pool of hearing aid users. We measured the percentage of words participants could understand when speech was played in noise (the same noisy environments we used in this study), and then the percentage they could understand after we modified the speech signals with our selected algorithms. The study is not finished yet, but the preliminary results are very promising.
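To make the outcome measure concrete: each condition is scored as the percentage of presented keywords the listener repeats correctly. The sketch below uses hypothetical sentences and responses (the function name and the data are illustrative, not the actual study materials or analysis code):

```python
def word_score(presented, repeated):
    """Percentage of presented keywords the listener repeated correctly."""
    correct = sum(1 for w in presented if w in repeated)
    return 100.0 * correct / len(presented)

# Hypothetical responses to one sentence in two conditions.
keywords = ["bin", "blue", "at", "three", "now"]
baseline_response = ["bin", "at", "now"]          # unprocessed speech in noise
enhanced_response = ["bin", "blue", "at", "now"]  # algorithm-modified speech

print(word_score(keywords, baseline_response))  # 60.0
print(word_score(keywords, enhanced_response))  # 80.0
```

Averaging such scores over many sentences and listeners gives the per-condition intelligibility percentages compared in the study.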
In order to run the listening test, we needed the participants to wear headphones, since we presented binaural simulations of realistic environments, and these only work when delivered over headphones. This posed a technical challenge: how can someone use headphones and hearing aids at the same time? We solved the problem with openMHA, a computer-based hearing aid simulation, which we deployed as the last processing block in our audio chain. openMHA provides amplification and compression, and can be fitted to a specific listener just like any hearing aid. It is an open platform for research and can be downloaded here.
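The shape of that chain can be sketched in a few lines of Python. This is a toy stand-in, not openMHA itself: the enhancement stage is left as a placeholder, and the last block applies only a simple linear gain plus frame-based dynamic range compression, mimicking the amplification/compression role that openMHA plays at the end of the chain. All names and parameter values are illustrative:

```python
import math

def enhance(signal):
    """Placeholder for the intelligibility-enhancing algorithm under test."""
    return signal  # identity here; the real algorithms modify the speech

def hearing_aid_sim(samples, gain_db=20.0, threshold_db=-40.0, ratio=3.0, frame=64):
    """Toy last-stage block: linear amplification plus simple frame-based
    dynamic range compression, standing in for a fitted hearing aid."""
    out = []
    for i in range(0, len(samples), frame):
        block = samples[i:i + frame]
        rms = math.sqrt(sum(x * x for x in block) / len(block))
        level_db = 20.0 * math.log10(max(rms, 1e-12))
        over = max(0.0, level_db - threshold_db)            # dB above threshold
        eff_gain_db = gain_db - over * (1.0 - 1.0 / ratio)  # compress loud frames
        g = 10.0 ** (eff_gain_db / 20.0)
        out.extend(x * g for x in block)
    return out

# One ear of the chain: speech-in-noise mixture -> enhancement ->
# hearing-aid block -> headphone channel. Here the input is a quiet test tone.
mixture = [0.01 * math.sin(2 * math.pi * 440 * n / 16000) for n in range(16000)]
processed = hearing_aid_sim(enhance(mixture))
```

In the actual setup, openMHA replaces `hearing_aid_sim` and is fitted to each participant's audiogram, while the binaural room simulation feeds separate left and right channels into it.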
The poster we presented at SPIN 2020 in Toulouse, France, can be seen here.


This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675324.