Reuters interview with AudioTelligence

18 June 2019 – A technology breakthrough is set to end the frustration of shouting at your digital assistant device when it can’t hear you over your favourite music track. You’ll even be able to hear the conversation when a friend rings you from a noisy coffee shop. It’s all thanks to a revolutionary approach to the problem by experts at Cambridge start-up AudioTelligence.

The results speak for themselves. A typical smart speaker in the home will pick up 84 out of 100 items when a list is read in quiet conditions – but this drops to 22 items when background noise such as a washing machine is introduced. Using AudioTelligence software enables the device to recognise 94 items – even in a noisy environment.

Until now, the traditional approach to smart home assistants has involved several carefully calibrated microphones inside the speaker – with a mixture of echo cancellation, noise suppression and the signal processing technique of beamforming used to cancel out unwanted background noise. But there are drawbacks with this approach. Beamforming works best when you have perfect knowledge of the acoustic environment – but in the real world it is affected by echoes from loud music, for example. And although noise suppression technology will cancel out the sound from things like air-conditioning units, any speech at the same frequency will also be lost – making it difficult to understand what is being said.
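For readers curious how beamforming works in principle, the sketch below shows a basic delay-and-sum beamformer in Python: the microphone signals are time-aligned toward a chosen look direction and averaged, so sound from that direction is reinforced while sound from elsewhere partly cancels. The microphone geometry, sample rate and look direction are illustrative assumptions, not details of any particular product.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs=16000, c=343.0):
    """signals: (n_mics, n_samples); mic_positions: (n_mics, 3) in metres;
    look_direction: unit vector pointing from the array toward the talker."""
    n_mics, n_samples = signals.shape
    # Relative arrival time of a plane wave from the look direction at each mic.
    delays = -(mic_positions @ look_direction) / c
    delays -= delays.min()                       # shift so all delays are >= 0
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Advance each channel by its arrival delay (a phase shift in the frequency
    # domain) so sound from the look direction lines up, then average channels.
    aligned = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

# Hypothetical example: four mics in a 6 cm square array, talker straight ahead.
mics = np.array([[0.0, 0.0, 0.0], [0.06, 0.0, 0.0],
                 [0.0, 0.06, 0.0], [0.06, 0.06, 0.0]])
x = np.random.default_rng(0).normal(size=(4, 16000))  # stand-in for recorded audio
y = delay_and_sum(x, mics, np.array([0.0, 0.0, 1.0]))
```

As the article notes, this kind of spatial filtering only works well when the geometry and acoustics are known precisely, which is exactly the limitation the statistical approach described below is meant to avoid.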

What’s unique about the AudioTelligence solution is that it uses a statistical approach to solve the problem. Its data-driven blind audio signal separation technology is able to ‘listen’ to a real-world acoustic scene and identify the prominent sources of sound, including echoes and reflections. Sophisticated algorithms enable it to cancel out the unwanted interference that often makes effective speech recognition impossible. The system is also adaptive – so any new background noises that appear are also eliminated in real time.
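To illustrate the general idea of statistical blind source separation (this is a generic textbook technique, not AudioTelligence's proprietary algorithm), the short Python sketch below uses FastICA from scikit-learn to recover two synthetic sources from two simulated microphone mixtures. The signals and mixing matrix are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 16000
t = np.arange(2 * fs) / fs                           # two seconds of "audio"
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
noise_like = np.random.default_rng(1).normal(size=t.size)

sources = np.c_[speech_like, noise_like]             # (n_samples, 2) true sources
mixing = np.array([[1.0, 0.6],                       # each simulated microphone
                   [0.4, 1.0]])                      # hears a different blend
observed = sources @ mixing.T                        # what the two mics record

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)              # separated estimates, up to
                                                     # scaling and channel order
```

The key point is that nothing about the room or the microphone layout is told to the algorithm: the separation is driven purely by the statistics of the recorded mixtures, which is what "blind" means here.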

“It’s autofocus for sound – an ‘acoustic lens’ that enables your home assistant, smart speaker or in-car voice control system to focus on what you are saying when there is a lot of background noise,” said Ken Roberts, CEO of AudioTelligence. “The technology can even cope with a Skype call in a busy coffee shop – ensuring the person on the other end can hear you clearly at all times.”

The AudioTelligence solution works with as few as four microphones and doesn’t need calibrating or training – so existing devices can be easily reconfigured with a software update. The algorithm has a latency of just five milliseconds – making it ideal for real-time applications – and it’s designed to obtain the best possible performance from the available processing power.
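To put the quoted 5 ms figure in context, the quick calculation below shows how few samples that latency budget allows per processing frame at two common audio sample rates; 16 kHz and 48 kHz are assumptions for illustration, as the article does not state the rate used.

```python
# 5 ms latency budget expressed in samples per frame at two assumed sample rates.
for fs in (16_000, 48_000):
    frame = int(0.005 * fs)
    print(f"{fs} Hz: {frame} samples per 5 ms frame")
```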

AudioTelligence is a spin-out from CEDAR Audio – a world leader in audio restoration, speech enhancement and audio forensic investigation. The start-up raised £3.1m last year from Cambridge Innovation Capital and Cambridge Enterprise to fund its ambitious growth plans and meet the demand for its technology.

“Our breakthrough means that at last your home assistant will be able to hear and understand you even when the TV is on and the children are screaming and shouting,” said Ken. “Although whether you will hear its response is another matter…”

Find out what happened when Reuters put the AudioTelligence technology to the test.

 

