A team of engineers and scientists at The Ohio State University believe they have made a breakthrough on a common problem for the hearing impaired: how to help them recognize speech in the midst of background noise.
Their findings, published in the Journal of the Acoustical Society of America, may pave the way for the next generation of digital hearing aids that could be built into smartphones, where the phone itself serves as the computer processor and sounds are sent wirelessly to tiny earplugs.
While this form of smartphone/hearing aid technology is already being refined through smart hearing apps, the OSU team's work is based on a new computer algorithm that quickly analyzes speech and removes much of the background noise. In a press release announcing the findings, DeLiang "Leon" Wang, professor of computer science and engineering and lead researcher of the study, said, "For 50 years, researchers have tried to pull out the speech from the background noise. That hasn't worked, so we decided to try a very different approach: classify the noisy speech and retain only the parts where speech dominates the noise."
Wang noted that his team trained the algorithm to separate speech from background noise by exposing it to different words spoken amid background noise. While the initial study used prerecorded sounds, future studies will concentrate on real-time speech set against normal background noise.
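The "classify and retain" idea Wang describes is commonly realized as time-frequency masking: divide the sound into short time frames and frequency bins, classify each unit as speech-dominant or noise-dominant, and keep only the former. The toy Python sketch below illustrates that general concept, not the study's actual implementation; the function names (`stft`, `ideal_binary_mask`), the Hann-window transform, and the 0 dB threshold are illustrative assumptions, and it uses separately known speech and noise signals, as one might during training, rather than estimating the mask from a noisy mixture.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Naive short-time Fourier transform with a Hann window.
    Returns an array of shape (frames, frequency bins)."""
    w = np.hanning(win)
    frames = [np.fft.rfft(w * x[i:i + win])
              for i in range(0, len(x) - win + 1, hop)]
    return np.array(frames)

def ideal_binary_mask(speech, noise, thresh_db=0.0):
    """Mark each time-frequency unit 1 if speech power exceeds
    noise power by thresh_db (speech 'dominates'), else 0."""
    S, N = stft(speech), stft(noise)
    snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) /
                           (np.abs(N) ** 2 + 1e-12))
    return (snr_db > thresh_db).astype(float)

# Toy example: a 440 Hz tone standing in for speech, buried in white noise.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
speech = np.sin(2 * np.pi * 440 * t)
noise = rng.normal(scale=1.0, size=t.shape)

mask = ideal_binary_mask(speech, noise)
# Retain only the speech-dominant units of the noisy mixture.
masked = stft(speech + noise) * mask
```

In a deployed system the clean speech is of course unavailable, so a trained classifier must predict this mask from the noisy signal alone; the sketch only shows what "retaining the parts where speech dominates" means in time-frequency terms.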
Continued work on the algorithm and more testing on human volunteers will be supported through a new $1.8 million grant from the National Institutes of Health.