
Our Technology Portfolio

Our audio processing technologies are the result of intensive research into the most effective approaches for extracting clean speech signals in complex acoustic environments. The result: truly innovative, patented and tested solutions that work in the real world.

These flexible software solutions power advanced audio experiences, making them a perfect fit for a wide range of applications, including hearing enhancement on smartphones and TWS earbuds, video recording and video calling.

Our aiso™ solutions leverage a combination of the following proprietary technologies:

  • Blind Source Separation
  • Acoustic Echo Cancellation
  • Asynchronous AEC
  • Low-Latency Noise Suppression
  • Source Selection

Blind Source Separation

Our proprietary Blind Source Separation technology is the foundation of our aiso™ solutions. It leverages Bayesian statistical models to process multi-microphone audio signals, separating each sound source with a consistent spatial location into its own distinct channel of audio.

  • Improves Signal-to-Interference Ratio (SIR) by up to 30 dB.
  • Is uniquely valuable when there are overlapping speech signals – a problem that cannot be addressed effectively with noise suppression technology alone.
  • Functions with very low latency, even meeting the demands of face-to-face communications in hearing enhancement applications.
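
To make the principle concrete, here is a minimal, purely illustrative sketch that separates a simulated two-microphone mixture using an off-the-shelf independent component analysis routine (scikit-learn's FastICA). The signals and mixing matrix are invented for the example, and FastICA stands in for, rather than reproduces, the Bayesian statistical models behind our Blind Source Separation.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 16_000                                   # sample rate in Hz (assumed for the example)
t = np.arange(0, 2.0, 1.0 / fs)

# Two hypothetical sound sources at fixed spatial locations.
talker = np.sign(np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 220 * t)
hum = np.sin(2 * np.pi * 440 * t + 2 * np.cos(2 * np.pi * 0.5 * t))

sources = np.c_[talker, hum]                  # shape: (samples, 2)
mixing = np.array([[1.0, 0.6],                # how each source reaches microphone 1
                   [0.4, 1.0]])               # how each source reaches microphone 2
mics = sources @ mixing.T                     # simulated two-microphone capture

# Blind separation: recover one channel per source without knowing `mixing`.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
separated = ica.fit_transform(mics)           # one separated source per column
```
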
Acoustic Echo Cancellation

Acoustic Echo Cancellation (AEC) cancels loudspeaker signals from the audio captured by a device’s microphones, allowing you to communicate without being tripped up by echoes.

Our AEC technology:

  • Is designed to work in harmony with our audio source separation technology.
  • Offers superior cancellation of reverb tails and room reflections, even with multiple reference signals.
  • Is resilient to double-talk and supportive of full duplex performance.
  • Achieves Echo Reduction Loss Enhancement (ERLE) of up to 36 dB.

Our AEC technology can be configured in a variety of ways to optimise it for specific applications, including a standard configuration, operation without a reference signal, or operation with an asynchronous reference signal.
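
As a rough illustration of the principle, rather than of our AEC implementation, the sketch below removes a loudspeaker reference from a microphone signal with a textbook normalised-LMS adaptive filter; the filter length and step size are arbitrary example values.

```python
import numpy as np

def nlms_echo_canceller(mic, reference, taps=256, mu=0.5, eps=1e-8):
    """Subtract an adaptively filtered copy of the loudspeaker reference
    from the microphone signal, leaving near-end speech plus residual echo."""
    w = np.zeros(taps)                    # adaptive estimate of the echo path
    buf = np.zeros(taps)                  # most recent reference samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        echo_estimate = w @ buf           # predicted echo at this sample
        e = mic[n] - echo_estimate        # error = microphone minus predicted echo
        out[n] = e
        w += (mu / (buf @ buf + eps)) * e * buf   # NLMS weight update
    return out
```

In practice the echo path also contains long reverb tails and room reflections, which is why filter design, resilience to double-talk and integration with source separation matter far more than this toy loop suggests.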

Asynchronous Acoustic Echo Cancellation

Modular, multi-device conferencing products have their microphones and speakers integrated into separate physical devices that use separate sampling clocks, for example:

  • Microphone samples handled by a camera device that houses the microphones.
  • Reference samples handled by a companion TV that houses the speakers.

Because the reference signal is generated on one device while the microphone signal is captured on another, the two signals are asynchronous.

While such asynchronous signals would pose a problem for standard AEC technologies, our AEC can be configured to be resilient to them, making it the ideal solution for modular, multi-device conferencing systems.
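
One common way to handle the mismatch, shown here purely as an illustration and not as our implementation, is to resample the reference onto the microphone device’s clock before echo cancellation. In this sketch the clock offset is assumed to be known; a real system has to estimate and track it continuously.

```python
import numpy as np

def align_reference(reference, ppm_offset):
    """Resample the far-end reference to the microphone device's clock,
    given an estimated clock offset in parts per million."""
    ratio = 1.0 + ppm_offset * 1e-6            # e.g. +50 ppm: reference clock runs slightly fast
    n = len(reference)
    # Positions of microphone-clock samples expressed on the reference's time axis.
    positions = np.arange(n) * ratio
    positions = positions[positions <= n - 1]  # avoid extrapolating past the last sample
    return np.interp(positions, np.arange(n), reference)

# The aligned reference can then be passed to a conventional echo canceller,
# such as the NLMS sketch shown earlier.
```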

Low-Latency Noise Suppression

Noise Suppression technology reduces stationary and transient noise in speech signals, increasing the Signal-to-Noise Ratio and reducing listening fatigue.

One of the most popular configurations of our Noise Suppression technology functions with a very low latency of just 1 millisecond, meeting the strict latency demands of face-to-face communications for hearing enhancement applications.

The Low-Latency configuration of our Noise Suppression is optimised to work in harmony with our audio source separation technology, enabling it to suppress not only ambient noise but also the residual noise from interfering sound sources.
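
The arithmetic behind that budget is simple: at a 48 kHz sample rate (assumed here for illustration), 1 millisecond corresponds to just 48 samples, so any block-based processing must work on very small frames, as the calculation below shows.

```python
SAMPLE_RATE_HZ = 48_000        # assumed capture rate for the example

def algorithmic_latency_ms(frame_samples, lookahead_samples=0):
    """Latency contributed by block processing alone: one frame of buffering
    plus any lookahead, before any compute time is added."""
    return 1_000 * (frame_samples + lookahead_samples) / SAMPLE_RATE_HZ

print(algorithmic_latency_ms(48))    # 1.0 ms  -- fits a 1 ms hearing-enhancement budget
print(algorithmic_latency_ms(480))   # 10.0 ms -- a typical VoIP frame, far too slow for face-to-face use
```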

Source Selection

Once our Blind Source Separation has separated out the sound sources, our source selection technology enables only the desired sound sources to be kept, while interfering sound sources are discarded.

Desired or ‘target’ sound sources can be selected in a number of ways:

  • Automatic selection of nearby voices, such as the voices of people sitting at the same table as the user in a busy restaurant.
  • Manual or automatic selection of desired sound sources based on their location in relation to a device.
  • Automatic selection of sound sources based on their location in relation to a device, such as the selection of all sound sources corresponding to the locations of faces detected by a face tracking system.
  • Automatic selection of sound sources based on a range of locations in relation to a device, such as the selection of all sound sources located within a camera’s field-of-view.
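
As an illustration of the last case, the sketch below keeps only those separated sources whose estimated direction falls inside an assumed 90° horizontal camera field of view. The SeparatedSource structure and its azimuth field are hypothetical names invented for the example, not part of our API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SeparatedSource:
    samples: list        # audio for this separated channel
    azimuth_deg: float   # estimated direction of arrival relative to the device

def select_in_view(sources: List[SeparatedSource],
                   fov_deg: float = 90.0) -> List[SeparatedSource]:
    """Keep only the sources whose direction falls inside the camera's
    horizontal field of view, centred straight ahead of the device."""
    half = fov_deg / 2.0
    return [s for s in sources if -half <= s.azimuth_deg <= half]

# Example: keep the talker in front of the camera, discard the one behind it.
kept = select_in_view([
    SeparatedSource(samples=[], azimuth_deg=12.0),    # in view -> kept
    SeparatedSource(samples=[], azimuth_deg=-130.0),  # behind the device -> discarded
])
```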
