What are hearing aids?
Hearing aids (HA) are prosthetic devices designed to facilitate access to sound for individuals with hearing loss. There are different types of hearing prosthetics: besides hearing aids, cochlear implants and bone-anchored devices are among the most common.
The choice of the right device for a patient depends on the type of hearing loss, which can be conductive (the mechanical parts of the ear that propagate the sound do not vibrate normally), sensorineural (the cells in the cochlea are partially or completely inactive, or the acoustic nerve does not transmit the electrical signals properly) or mixed (both conditions at once). Some types of hearing disorders do not depend on the anatomy of the hearing pathway (e.g. those related to neurological disorders) and cannot benefit from the use of prosthetics. Learn more about hearing loss and how to get help (in the UK) if you are concerned about your hearing.

HA are the most widely used type of hearing prosthetic, as they can provide some benefit to the majority of patients. Read some statistical data about hearing loss provided by the World Health Organization. The model of HA shown in the picture is called Behind the Ear (BTE); there are also models that can be placed entirely in the ear canal. Nowadays most HA are digital, which means they have a DSP (Digital Signal Processing) module.
Devices of this type (in the picture) typically mount two microphones each, and the two hearing aids exchange information with each other via Bluetooth. They have an ADC (Analog-to-Digital Converter): the sound is sampled, digitally processed and then converted back to an analogue signal for the user to hear. It is easy to modify the response of a digital HA so that it suits the needs of a specific patient; however, hearing aids do not restore hearing. These devices aim at making sounds (especially speech) more accessible, but HA users still experience many problems. Let's take a deeper look.
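The sample-process-reconstruct chain described above can be sketched in a few lines. This is purely an illustrative sketch, not real hearing-aid firmware: the sampling rate, the 16-bit depth and the single flat gain stage are all assumptions chosen for simplicity (real devices apply frequency-dependent, level-dependent gains).

```python
import numpy as np

fs = 16_000  # assumed sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
analog = 0.5 * np.sin(2 * np.pi * 440 * t)     # stand-in for the microphone signal

# "ADC": quantise the analogue signal to 16-bit integers
digital = np.round(analog * 32767).astype(np.int16)

# DSP stage: here, a toy frequency-independent gain of +6 dB
gain_db = 6.0
processed = digital * 10 ** (gain_db / 20)

# "DAC": back to a bounded float signal for the loudspeaker (the receiver)
output = np.clip(processed / 32767, -1.0, 1.0)
```

The clipping step stands in for the physical limits of the tiny loudspeaker: amplification cannot exceed the output range of the device.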
The most common problem for HA users is understanding speech in noisy conditions. Speech in noise is hard to understand for anybody - normal-hearing listeners struggle too - but for users of hearing devices the problem is magnified. How so?
Humans have the remarkable ability to focus on the sound they want to hear by using spatial cues; the external structure of our ears helps us do that. The sound HA users hear is conveyed in a different way: the signal is picked up by the microphones on the device (which is usually placed behind the pinna) and played back by a tiny loudspeaker directly into the ear canal, hence bypassing the external structure of the ear.
To understand this better, you can try pushing your pinnae (the external part of the ear) with your fingers so that they lie flat against your head. Everything sounds different all of a sudden, doesn't it? It's more difficult to tell where sounds come from!
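One of the spatial cues mentioned above is the interaural time difference (ITD): sound from one side reaches the nearer ear slightly earlier. As a toy illustration (not how the auditory system or a hearing aid actually computes it), the delay between two ear signals can be estimated by cross-correlation; the signals and the delay here are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                  # assumed sampling rate (Hz)
delay_samples = 8            # 0.5 ms: a plausible interaural time difference

source = rng.standard_normal(fs // 10)   # 100 ms of noise as a stand-in source
left = source
right = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])

# Cross-correlate the two ear signals; the lag of the peak estimates the ITD
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (len(left) - 1)   # positive: right ear lags left
itd_ms = 1000 * lag / fs
```

Here the peak of the cross-correlation recovers the 8-sample (0.5 ms) delay, suggesting the source sits to the listener's left.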
Hearing-impaired subjects are typically affected by loudness recruitment: soft sounds are not heard, while loud sounds are too loud - even painful. This translates into a reduced dynamic range, which HA try to address by heavily compressing the sound. Unfortunately, this signal processing may cause unpleasant distortions for the listener.
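Dynamic range compression can be illustrated with a minimal static compressor. This is a textbook sketch under simplifying assumptions: it acts instantaneously, with no attack/release smoothing, no frequency bands and no fitting rules, all of which real hearing aids add; the threshold and ratio values are arbitrary.

```python
import numpy as np

def compress(x, threshold_db=-30.0, ratio=3.0):
    """Static dynamic-range compression (no attack/release smoothing)."""
    eps = 1e-12                                  # avoid log10(0)
    level_db = 20 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)            # attenuate only above threshold
    return x * 10 ** (gain_db / 20)
```

A full-scale sample (0 dB) is 30 dB over the threshold, so with ratio 3:1 it is attenuated by 20 dB, while a quiet sample below the threshold passes through unchanged - squeezing loud and soft sounds into the patient's reduced dynamic range.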
Moreover, patients also tend to have a reduced sensitivity to changes in frequency, which means that it can be difficult for them to tell similar sounds apart.
Lastly, it is worth pointing out that all the signal processing performed by HA is done on the speech-plus-noise mixture, as the devices do not have access to the two signals separately. The audio output of HA is hence "blurred" - unlike a movie or a music production played on your home theatre system, where the audio tracks are processed separately.
There are a few technical strategies that can be used to try and overcome the problem. Have you ever noticed this symbol in a public space (e.g. a bank, a conference hall, an elevator)? It means the place is equipped with Telecoil technology. In this case, the speech signal is picked up by a microphone and sent directly to the HA via magnetic induction.
Also, modern HA can be paired with a Bluetooth microphone, which can be placed closer to the sound source of interest (e.g. a talker). They can also be paired, via a Bluetooth adapter, with the audio output of a smartphone or a computer.
These solutions, however, are not always feasible or desirable; hence there is a need to recover the signal from the noisy mixture at the receiver's end.
For this reason, HA are equipped with several algorithms, such as voice activity detection, denoising, automatic scene estimation and acoustic beamforming. Collectively, these techniques are known as speech enhancement, which refers to the retrieval of the original speech signal from a noisy mixture.
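As an illustration of one classical speech-enhancement technique, here is a minimal magnitude spectral subtraction sketch: a noise magnitude spectrum is estimated from a noise-only segment and subtracted from each frame of the noisy signal. This is a textbook method, far simpler than what commercial devices run; the frame length, hop size and single-frame noise estimate are simplifying assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, n_fft=512, hop=256):
    """Minimal magnitude spectral subtraction with overlap-add."""
    window = np.hanning(n_fft)
    # Noise magnitude spectrum estimated from a single noise-only frame
    noise_mag = np.abs(np.fft.rfft(noise_only[:n_fft] * window))
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - n_fft, hop):
        frame = noisy[start:start + n_fft] * window
        spec = np.fft.rfft(frame)
        # Subtract the noise magnitude, floor at zero, keep the noisy phase
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + n_fft] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n_fft)
    return out
```

With roughly stationary noise this noticeably reduces the noise energy, but the hard zero-flooring is exactly what produces the "musical noise" artefacts that HA users may perceive as unpleasant distortion.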
Notwithstanding advances in technology, many users still complain of poor intelligibility of speech in noise, especially in the case of multiple talkers. One of our research questions is: can there be another approach to the problem when we deal with speech playback - i.e. when the speech comes from a device, not from a human?
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675324.