Our Technology Portfolio

Our audio processing technologies are the result of over 26 person-years of research into the most effective approaches for extracting clean speech signals in complex acoustic environments. Our flexible, software-only audio solutions can enable assistive listening capabilities on multi-microphone devices, including smartphones. They are also ideal for real-time applications such as video recording and calling on smartphones, VoIP calling, conferencing systems and automotive infotainment.

Our aiso solutions leverage a combination of the following proprietary technologies:

  • Blind Source Separation
  • Acoustic Echo Cancellation
  • Asynchronous AEC
  • Low-Latency Noise Suppression
  • Source Selection

Blind Source Separation

Our proprietary Blind Source Separation technology is the foundation of our aiso™ solutions. It processes multi-microphone audio signals using Bayesian statistical models, separating each source of sound with a consistent spatial location into its own distinct channel of audio.

  • Improves Signal-to-Interference Ratio (SIR) by up to 30 dB.
  • Is uniquely valuable when there are overlapping speech signals – a problem that cannot be addressed effectively with noise suppression technology alone.
  • Functions with very low latency, even meeting the demands of face-to-face communications in hearing assistance applications.
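Our exact separation algorithm is proprietary, but the general idea of blind source separation can be illustrated with a textbook technique. The sketch below, a minimal FastICA implementation in NumPy, unmixes two synthetic non-Gaussian sources from a simulated two-microphone mixture; all signals, the mixing matrix and the parameters are illustrative assumptions, not our production method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8000
t = np.arange(n) / 16000.0

# Two synthetic non-Gaussian sources standing in for talkers
s1 = np.sign(np.sin(2 * np.pi * 440 * t))   # square wave
s2 = np.sin(2 * np.pi * 233 * t) ** 3       # peaky sine
S = np.vstack([s1, s2])

# Simulated two-microphone capture with an unknown mixing matrix
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

# Whitening: decorrelate and normalise the mixture
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA fixed-point iteration (tanh contrast, symmetric decorrelation)
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    dw, Ew = np.linalg.eigh(W_new @ W_new.T)
    W = Ew @ np.diag(dw ** -0.5) @ Ew.T @ W_new   # (W W^T)^(-1/2) W

# Recovered sources, each in its own channel (up to permutation and sign)
Y = W @ Z
```

After convergence, each row of `Y` correlates strongly with exactly one of the original sources, which is the separation-into-distinct-channels behaviour described above.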
Acoustic Echo Cancellation

Acoustic Echo Cancellation (AEC) cancels loudspeaker signals from the audio captured by a device’s microphones, allowing people to communicate without being tripped up by echoes.

Our AEC technology:

  • Is designed to work in harmony with our audio source separation technology.
  • Offers superior cancellation of reverb tails and room reflections, even with multiple reference signals.
  • Is resilient to double-talk and supports full-duplex performance.
  • Achieves Echo Reduction Loss Enhancement (ERLE) of up to 36 dB.

Our AEC technology can be configured in a variety of ways to optimise it for specific applications, including a standard configuration, operation without a reference signal, or operation with asynchronous reference signals.
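Our production AEC is proprietary, but the core reference-cancellation idea can be sketched with a classic single-reference NLMS (normalised least-mean-squares) adaptive filter. Everything below, the simulated echo path, filter length and step size, is an illustrative assumption, not our implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, fs = 32000, 16000

ref = rng.standard_normal(n)                       # far-end (loudspeaker) reference
h = np.array([0.6, 0.3, -0.2, 0.1, 0.05])          # simulated acoustic echo path
mic = np.convolve(ref, h)[:n] + 0.01 * rng.standard_normal(n)  # echo + faint near end

taps, mu, eps = 16, 0.5, 1e-8
w = np.zeros(taps)          # adaptive estimate of the echo path
buf = np.zeros(taps)        # most recent reference samples
out = np.zeros(n)
for i in range(n):
    buf = np.roll(buf, 1)
    buf[0] = ref[i]
    e = mic[i] - w @ buf                           # residual after echo removal
    out[i] = e
    w += mu * e * buf / (buf @ buf + eps)          # NLMS update

# Echo Return Loss Enhancement over the final second
erle_db = 10 * np.log10(np.mean(mic[-fs:] ** 2) / np.mean(out[-fs:] ** 2))
```

Once the filter has converged, the residual contains essentially only the near-end signal, and the measured ERLE on this toy scenario is well above 20 dB.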

Asynchronous Acoustic Echo Cancellation

Modular, multi-device conferencing products have their microphones and speakers integrated into separate physical devices, each running on its own sampling clock, for example:

  • Microphone samples captured on a camera device.
  • Reference samples played out on a companion TV with speakers.

The reference signal is therefore created on one device while the microphone signal is created separately on another, making the two signals asynchronous.

While such asynchronous signals would pose a problem for standard AEC technologies, our AEC can be configured to be resilient to this clock mismatch, making it the ideal solution for modular, multi-device conferencing systems.
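To see why independent sampling clocks matter, here is a small NumPy illustration (not our compensation method) with a deliberately exaggerated clock offset: treating the two streams as if they shared one clock lets the drift accumulate, while resampling the reference onto the microphone device's timeline (here with simple linear interpolation) realigns them.

```python
import numpy as np

# Two devices sample the "same" 440 Hz signal with independent clocks
fs_ref, fs_mic = 48000.0, 48050.0   # ~1000 ppm offset, exaggerated for clarity
dur, f = 2.0, 440.0

t_ref = np.arange(int(dur * fs_ref)) / fs_ref   # reference-device (TV) clock
t_mic = np.arange(int(dur * fs_mic)) / fs_mic   # microphone-device (camera) clock
ref = np.sin(2 * np.pi * f * t_ref)
mic = np.sin(2 * np.pi * f * t_mic)

# Naive: pretend both streams share one clock -> drift accumulates over time
m = min(len(ref), len(mic))
naive_err = np.max(np.abs(ref[:m] - mic[:m]))

# Compensated: resample the reference onto the microphone device's timeline
ref_on_mic_clock = np.interp(t_mic, t_ref, ref)
resampled_err = np.max(np.abs(ref_on_mic_clock - mic))
```

Over just two seconds the naive alignment error grows to the full signal amplitude, while the resampled reference stays closely aligned; a practical system would additionally have to estimate and track the clock offset continuously rather than assume it is known.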

Low-Latency Noise Suppression

Noise Suppression technology reduces stationary and transient noise in speech signals, increasing the Signal-to-Noise Ratio (SNR) and reducing listening fatigue.

One of the most popular configurations of our Noise Suppression technology functions with a very low latency of just 1 millisecond, meeting the strict latency demands of face-to-face communications for hearing assistance applications.

The Low-Latency configuration of our Noise Suppression is optimised to work in harmony with our audio source separation technology, enabling it to suppress not only ambient noise but also the residual noise from interfering sound sources.
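Our suppression algorithm itself is proprietary; as a generic illustration of the frame-based principle, the sketch below applies textbook spectral subtraction on short 4 ms frames (short frames are what keep algorithmic latency low, though our technology reaches 1 ms). The synthetic "speech" signal, frame size and gain floor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, frame = 16000, 64                 # 64-sample frames -> 4 ms of algorithmic latency
t = np.arange(fs) / fs                # one second of audio

speech = np.sin(2 * np.pi * 500 * t) * (t > 0.25)   # tone standing in for speech
noisy = speech + 0.3 * rng.standard_normal(fs)

frames = noisy.reshape(-1, frame)
spectra = np.fft.rfft(frames, axis=1)

# Noise power spectrum estimated from the speech-free first 0.25 s
n_noise = int(0.25 * fs) // frame
noise_psd = np.mean(np.abs(spectra[:n_noise]) ** 2, axis=0)

# Per-bin spectral-subtraction gain, floored to limit musical noise
gain = np.maximum(1.0 - noise_psd / (np.abs(spectra) ** 2 + 1e-12), 0.1)
clean = np.fft.irfft(gain * spectra, n=frame, axis=1).ravel()

def snr_db(sig, err):
    return 10 * np.log10(np.sum(sig ** 2) / np.sum(err ** 2))

active = t > 0.25
snr_in = snr_db(speech[active], (noisy - speech)[active])
snr_out = snr_db(speech[active], (clean - speech)[active])
```

Bins dominated by speech pass through almost unchanged, while noise-dominated bins are attenuated, so the output SNR over the active region is several dB higher than the input SNR.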

Source Selection

Once our Blind Source Separation has separated out the sound sources, our source selection technology enables just the desired sound sources to be kept, while interfering sound sources are discarded.

Desired or ‘target’ sound sources can be selected in a number of ways:

  • Automatic selection of nearby voices, such as the voices of people sitting at the same table as the user in a busy restaurant.
  • Manual or automatic selection of desired sound sources based on their location in relation to a device.
  • Automatic selection of sound sources based on their location in relation to a device, such as the selection of all sound sources corresponding to the locations of faces detected by a face tracking system.
  • Automatic selection of sound sources based on a range of locations in relation to a device, such as the selection of all sound sources located within a camera’s field-of-view.
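The last selection mode, keeping only the sources inside a camera's field-of-view, can be sketched very simply once separation has produced per-source channels with estimated directions. The function and data below are hypothetical illustrations, not our API.

```python
import numpy as np

def select_sources(channels, azimuths_deg, half_fov_deg):
    """Keep only the separated channels whose estimated azimuth lies
    within +/- half_fov_deg of straight ahead, and mix them down."""
    keep = [i for i, az in enumerate(azimuths_deg) if abs(az) <= half_fov_deg]
    if keep:
        mixed = np.sum([channels[i] for i in keep], axis=0)
    else:
        mixed = np.zeros_like(channels[0])
    return keep, mixed

# Four hypothetical separated channels with estimated azimuths
channels = [np.full(4, v) for v in (1.0, 2.0, 4.0, 8.0)]
azimuths = [-10.0, 25.0, 95.0, -170.0]   # degrees; 0 = camera boresight

keep, mixed = select_sources(channels, azimuths, half_fov_deg=35.0)
# keep == [0, 1]: only the two in-view sources survive; the rest are discarded
```

The same shape of logic covers the other modes listed above: swapping the angular test for a distance test gives nearby-voice selection, and driving `azimuths_deg` from a face tracker gives face-based selection.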

Get in touch for more information
