ClearBuds is one of the first machine-learning systems to operate in real time on a smartphone, using a novel microphone system. As meetings shifted online during the COVID-19 lockdown, many people found that chattering roommates, garbage trucks and other loud sounds disrupted important conversations.
This experience inspired three University of Washington researchers, who were roommates during the pandemic, to develop better earbuds. ClearBuds enhance the speaker’s voice and reduce background noise.
The researchers presented this project at the ACM International Conference on Mobile Systems, Applications, and Services.
What is the Difference Between ClearBuds and Other Earbuds?
"ClearBuds differentiate themselves from other wireless earbuds in two key ways," said co-lead author Maruchi Kim, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. "First, ClearBuds uses a dual microphone array. Microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the speaker’s voice."

While most commercial earbuds also have microphones on each earbud, only one earbud actively sends audio to the phone at a time. With ClearBuds, each earbud sends a stream of audio to the phone, and the researchers designed Bluetooth networking protocols to keep these streams synchronized to within 70 microseconds of each other.
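The article does not describe how the synchronization is measured, but the offset between two audio streams is conventionally estimated from the peak of their cross-correlation. The sketch below illustrates that idea; the function name and the 16 kHz toy signal are illustrative assumptions, not the team's protocol.

```python
import numpy as np

def stream_offset_us(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Estimate how many microseconds `right` lags behind `left` by
    locating the peak of their cross-correlation. Illustrative sketch,
    not the ClearBuds synchronization protocol."""
    corr = np.correlate(left, right, mode="full")
    # The peak index maps to the delay of `right` relative to `left`.
    delay_samples = (len(right) - 1) - int(np.argmax(corr))
    return delay_samples / sample_rate * 1e6

# Toy check with white noise: a 2-sample lag at a 16 kHz sample rate
# corresponds to 125 microseconds.
rng = np.random.default_rng(0)
sr = 16_000
left = rng.standard_normal(4096)
right = np.concatenate([np.zeros(2), left[:-2]])  # right lags left by 2 samples
print(stream_offset_us(left, right, sr))  # → 125.0
```

At 16 kHz, one sample is 62.5 microseconds, which is why sample-accurate alignment is in the same ballpark as the 70-microsecond bound mentioned above.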
The team’s neural network algorithm runs on the phone to process the audio streams. First, it suppresses any non-voice sounds. Then it isolates and enhances the sound that arrives at both earbuds at the same time, which is the speaker’s voice.
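The second stage can be illustrated with a much simpler stand-in: when two aligned streams are averaged, the component common to both (the wearer's voice) is preserved, while background noise that is uncorrelated between the two microphones partially cancels. This is only a minimal sketch of the underlying principle, not the team's neural network.

```python
import numpy as np

def enhance_common(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Toy illustration: averaging two aligned streams keeps the component
    they share (the wearer's voice) and attenuates uncorrelated noise.
    Not the ClearBuds algorithm."""
    return 0.5 * (left + right)

rng = np.random.default_rng(1)
n, sr = 16_000, 16_000
voice = np.sin(2 * np.pi * 200 * np.arange(n) / sr)  # shared "voice" signal
left = voice + 0.5 * rng.standard_normal(n)          # independent noise, left ear
right = voice + 0.5 * rng.standard_normal(n)         # independent noise, right ear
out = enhance_common(left, right)

# Residual noise power is roughly halved relative to a single stream.
print(np.var(out - voice) / np.var(left - voice))
```

A learned network goes much further than this averaging trick, but the toy example shows why having two synchronized streams, rather than one, gives the algorithm something to exploit.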
When the researchers compared ClearBuds with Apple AirPods Pro, ClearBuds performed better, achieving a higher signal-to-distortion ratio across all tests.
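The article does not define the metric, but signal-to-distortion ratio is conventionally the ratio, in decibels, of the energy of the clean reference signal to the energy of the error in the estimate. A basic version, which is not necessarily the exact variant the researchers used, looks like this:

```python
import numpy as np

def sdr_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Basic signal-to-distortion ratio in dB: energy of the clean reference
    divided by the energy of the residual error. Higher means less distortion.
    A common textbook form, not necessarily the paper's exact variant."""
    residual = estimate - reference
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(residual**2))

# A constant 0.1 error on a unit-amplitude signal leaves 1% residual
# energy, i.e. about 20 dB.
ref = np.ones(100)
print(sdr_db(ref, ref + 0.1))  # ≈ 20.0
```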
The team also tested ClearBuds "in the wild," by recording eight people reading from Project Gutenberg in noisy environments, such as in a coffee shop or on a busy street.
The researchers then had 37 people rate 10- to 60-second clips of these recordings.
Participants rated clips that were processed through ClearBuds’ neural network as having the best noise suppression and the best overall listening experience.
One limitation of ClearBuds is that people have to wear both earbuds to get the noise suppression experience, the researchers said.
But the real-time communication system developed here can be useful for a variety of other applications, the team said, including smart-home speakers, tracking robot locations or search and rescue missions.
The team is currently working on making the neural network algorithms even more efficient so that they can run on the earbuds.
Additional co-authors are Ira Kemelmacher-Shlizerman, an associate professor in the Allen School; Shwetak Patel, a professor in both the Allen School and the Electrical and Computer Engineering Department; and Shyam Gollakota and Steven Seitz, both professors in the Allen School. This research was funded by the National Science Foundation and the University of Washington’s Reality Lab.
Source: EurekAlert