Using AI To Make Hearing Aids Better

Hearing loss can be debilitating, significantly hindering an individual's daily life.  One of the key challenges is distinguishing voices in especially noisy environments.

A Danish team believe they may have come up with a solution, deploying AI to both recognize and separate voices even in unknown sound environments.  The work, documented in a recently published paper, aims to improve the ability of hearing aids to process sound in situations they have never encountered before.

“When the scenario is known in advance, as in certain clinical test setups, existing algorithms can already beat human performance when it comes to recognising and distinguishing speakers. However, in normal listening situations without any prior knowledge, the human auditory brain remains the best machine,” the team explain.

Helping to hear

The project was conducted in two parts, the first of which tackled the challenge of holding a one-to-one conversation in a noisy environment, such as on a train.  Here, the team developed an algorithm that amplifies the sound of the individual speaker whilst dampening outside noise, even without any prior knowledge of the specific situation.

“Current hearing aids are pre-programmed for a number of different situations, but in real life, the environment is constantly changing and requires a hearing aid that is able to read the specific situation instantly,” the team explain.
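The paper itself isn’t reproduced here, but a common way to realise this kind of adaptive noise suppression is mask-based enhancement: a neural network estimates, for each time-frequency bin of the noisy spectrogram, how much of the energy belongs to speech, and the mask is applied before resynthesising the audio.  The PyTorch sketch below illustrates that general idea only; the `DenoiserNet` name, the layer sizes, and the STFT settings are assumptions for illustration, not the team’s actual model.

```python
import torch
import torch.nn as nn

class DenoiserNet(nn.Module):
    """Toy mask estimator (illustrative, not the team's model): maps a
    noisy magnitude-spectrum frame to a [0, 1] mask that keeps speech."""
    def __init__(self, n_freq=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_freq), nn.Sigmoid(),  # per-bin mask in [0, 1]
        )

    def forward(self, mag):  # mag: (frames, n_freq)
        return self.net(mag)

def enhance(waveform, model, n_fft=512, hop=128):
    """Estimate a mask from the noisy spectrogram, apply it, resynthesise."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(waveform, n_fft, hop_length=hop,
                      window=window, return_complex=True)  # (n_freq, frames)
    mask = model(spec.abs().T).T                           # (n_freq, frames)
    return torch.istft(spec * mask, n_fft, hop_length=hop, window=window)
```

Because the mask is predicted frame by frame from the incoming audio itself, nothing has to be pre-programmed for a particular scene, which is the property the researchers highlight.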

The second part of the research focused on speech separation, which is vital when multiple people are speaking at once, as in a group setting such as a family meal.  For this, the team developed an algorithm that could accurately distinguish each of the separate voices, whilst still dampening outside noise.

The team believe that their deep learning-based approach provides unique benefits in being able to distinguish the noise to be dampened from the voices to be amplified.  What’s more, it can do this even in unfamiliar environments.

“The power of deep learning comes from its hierarchical structure that is capable of transforming noisy or mixed voice signals into clean or separated voices through layer-by-layer processing. The widespread use of deep learning today is due to three major factors: ever-increasing computation power, increasing amount of big data for training algorithms and novel methods for training deep neural networks,” they explain.
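That layer-by-layer picture maps naturally onto a separator network that outputs one mask per speaker.  A standard training trick in this literature (one common option, not necessarily the team’s exact method) is a permutation-invariant loss: since the network cannot know which output slot should track which talker, the loss scores both possible speaker orderings and keeps the better one.  A rough PyTorch sketch, with illustrative names and sizes:

```python
import torch
import torch.nn as nn

class SeparatorNet(nn.Module):
    """Toy two-speaker separator: the mixture spectrum is transformed
    layer by layer into one mask per speaker (sizes are illustrative)."""
    def __init__(self, n_freq=257, n_speakers=2):
        super().__init__()
        self.n_speakers = n_speakers
        self.net = nn.Sequential(
            nn.Linear(n_freq, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_freq * n_speakers), nn.Sigmoid(),
        )

    def forward(self, mix_mag):            # (frames, n_freq)
        masks = self.net(mix_mag)          # (frames, n_freq * n_speakers)
        return masks.view(-1, self.n_speakers, mix_mag.shape[-1])

def pit_loss(est, ref):
    """Permutation-invariant loss for two speakers: try both orderings
    of the reference voices and keep whichever fits better."""
    a = ((est[:, 0] - ref[:, 0])**2 + (est[:, 1] - ref[:, 1])**2).mean()
    b = ((est[:, 0] - ref[:, 1])**2 + (est[:, 1] - ref[:, 0])**2).mean()
    return torch.minimum(a, b)
```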

Taking to market

Of course, it’s one thing getting the technology working in a lab environment, but quite another to make it fit for market.  At the moment, the system is far too big to be worn by a user, so the next challenge is to make the algorithm efficient enough to run on a device small enough to sit behind the ear, as existing hearing aids do.  The team are confident that these challenges are all surmountable, however.
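The article doesn’t say how the team plan to shrink their system, but one generic technique for fitting neural networks onto low-power, ear-worn hardware is quantization, i.e. storing weights as 8-bit integers rather than 32-bit floats.  A minimal PyTorch illustration (the model here is a stand-in, not the team’s network):

```python
import torch
import torch.nn as nn

# Stand-in for an enhancement model; the sizes are illustrative.
model = nn.Sequential(
    nn.Linear(257, 256), nn.ReLU(),
    nn.Linear(256, 257), nn.Sigmoid(),
)

# Dynamic quantization stores Linear weights as int8 instead of float32,
# shrinking the model roughly 4x and speeding up CPU inference, both of
# which matter on a battery-powered device worn behind the ear.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```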

Settings with many people, such as a party, are the biggest challenge.  People with normal hearing are usually able to focus on the specific person of interest and shut out all background noise.  This so-called cocktail party phenomenon has generated a great deal of interest from the research community, which is keen to understand how the brain achieves it.  The researchers believe that their work takes us another step towards that goal.

“You sometimes hear that the cocktail party problem has been solved. This is not yet the case. If the environment and voices are completely unknown, which is often the case in the real world, current technology simply cannot match the human brain which works extremely well in unknown environments. But Morten’s algorithm is a major step toward getting machines to function and help people with normal hearing and those with hearing loss in such environments,” the researchers explain.

It’s a fascinating project, and you can see the technology in action via the video below.
