Virtual reality technology opens new doors of (spatial) perception: Immersive technology in the lab is enabling researchers to study sound perception in realistic settings
We rely on our ears to tell us where sounds are coming from, whether the chirp of a bird or the call of our name in a crowd. Locating and discriminating sound sources is extremely complex because the brain has to process spatial information from many, sometimes conflicting, cues. Using virtual reality and other immersive technologies, researchers have new ways to investigate how we make sense of the world through sound.
“These technologies allow us to bring the real world into the lab and, ultimately, the lab into the real world,” said G. Christopher Stecker, associate professor of hearing and speech sciences at Vanderbilt University. Stecker uses these immersive tools to probe auditory spatial awareness in naturalistic, yet controlled settings. He foresees that these technologies will yield improved hearing aids, more accurate diagnoses of auditory disorders, and video games with richer sound experiences.
Unlike vision, hearing gives us a 360-degree view of our surroundings. This helps us direct our attention to where sounds originate, which is crucial for navigating the environment and avoiding dangers such as an oncoming car.
At the 175th Meeting of the Acoustical Society of America, being held May 7-11, 2018, in Minneapolis, Minnesota, Stecker will survey his team’s use of virtual reality and augmented auditory reality to study how people use explicit and implicit sound cues.
In an ongoing study, subjects wear head-mounted displays that immerse them in a parklike setting and are told to turn their heads toward the sound they hear. In the background, doctoral student Travis Moore manipulates two essential localization cues. The first is the interaural time difference: the gap, measured in millionths of a second, between when sound waves reach each ear. The other is the interaural level difference: the disparity in sound pressure level registered at each ear.
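To make these two cues concrete, here is a minimal Python sketch, not from Stecker's lab, that estimates them from a two-channel signal: the time difference from the lag of the peak of the cross-correlation, and the level difference from the ratio of RMS energies. The function name, 48 kHz sample rate, and toy signal are illustrative assumptions.

```python
import numpy as np

FS = 48_000  # sample rate in Hz (an assumption for this demo)

def estimate_cues(left, right, fs=FS):
    """Estimate the two binaural cues from a two-channel signal:
    the interaural time difference (ITD) via cross-correlation and
    the interaural level difference (ILD) from the RMS energy ratio."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # positive: sound hit the left ear first
    itd_us = lag / fs * 1e6                  # microseconds

    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild_db = 20 * np.log10(rms(left) / rms(right))  # positive: louder at the left ear
    return itd_us, ild_db

# Toy check: a 500 Hz tone that reaches the left ear ~300 µs earlier and 6 dB louder.
t = np.arange(0, 0.05, 1 / FS)
tone = np.sin(2 * np.pi * 500 * t)
d = int(round(300e-6 * FS))                  # ~14 samples of delay at 48 kHz
left = np.concatenate([tone, np.zeros(d)])
right = np.concatenate([np.zeros(d), tone]) * 10 ** (-6 / 20)

print(estimate_cues(left, right))            # roughly (292 µs, 6 dB)
```

Run on the toy signal, the sketch recovers roughly the 300-microsecond lead and 6-decibel advantage built into the left channel.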
Consistent with earlier work, Moore has found considerable variability in how much weight subjects’ brains assign to each cue. “This is an important step because we really don’t know how this process of integrating two cues plays out in real-world listening tasks,” Stecker said.
Another ongoing study, meant to simulate a busy cocktail party, looks at how differences in acoustics, and the resulting differences in sound quality, echoes, and reverberation, influence spatial awareness. “In the ear, there’s a very clear representation of sound frequency and intensity, or loudness, but place has to be computed by the brain,” Stecker said. “The ear doesn’t know where things are. The brain figures it out.”
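As a rough illustration of what a room adds, the following toy sketch (again an assumption-laden stand-in, not the team's cocktail-party simulation) convolves a dry signal with a synthetic impulse response: a unit direct path followed by exponentially decaying noise. In the resulting "wet" signal, reflections pile on top of the direct sound, so the raw timing and level cues arriving at the ears no longer point to a single place.

```python
import numpy as np

FS = 48_000                     # sample rate in Hz (assumed)
rng = np.random.default_rng(0)

def toy_reverb(dry, rt60=0.4, fs=FS):
    """Convolve a dry signal with a synthetic impulse response: a unit
    direct path followed by exponentially decaying noise, a common toy
    stand-in for a room's echoes and reverberation."""
    n = int(rt60 * fs)
    t = np.arange(n) / fs
    ir = 0.3 * rng.standard_normal(n) * 10 ** (-3 * t / rt60)  # decays 60 dB by rt60
    ir[0] = 1.0                                                # the direct sound
    return np.convolve(dry, ir)

# A dry 50 ms tone burst versus the same burst in the simulated "room".
t = np.arange(0, 0.2, 1 / FS)
dry = np.sin(2 * np.pi * 500 * t) * (t < 0.05)
wet = toy_reverb(dry)
```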
Stecker imagines that studies examining these more implicit aspects of spatial awareness could lead to augmented reality devices that remotely render realistic virtual versions of people. “Consider a chat with your grandmother,” Stecker said. “This kind of technology could make it look and sound as if she were sitting on the couch across from you. To achieve that on the sound side, we will need to make the acoustics of that simulation indistinguishable from the real world.”
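For a flavor of what such rendering involves, here is a deliberately crude Python sketch that places a mono source at an azimuth using Woodworth's spherical-head approximation for the time difference plus a toy level difference. Every constant here is an assumption, and production augmented-reality audio would instead use measured head-related transfer functions and simulated room acoustics.

```python
import numpy as np

FS, HEAD_RADIUS, C = 48_000, 0.0875, 343.0   # Hz, meters, meters/second (assumed)

def spatialize(mono, azimuth_deg):
    """Place a mono source at an azimuth (positive = listener's right) using
    Woodworth's spherical-head ITD approximation plus a toy level difference.
    Real AR rendering would use measured HRTFs and room acoustics simulation."""
    th = np.radians(abs(azimuth_deg))
    itd = HEAD_RADIUS / C * (th + np.sin(th))        # seconds, Woodworth formula
    d = int(round(itd * FS))                         # delay in samples
    shadow = 10 ** (-3.0 * np.sin(th) / 20)          # crude head shadow, up to ~3 dB
    near = np.concatenate([mono, np.zeros(d)])       # ear facing the source
    far = np.concatenate([np.zeros(d), mono]) * shadow   # ear in the head's shadow
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)           # shape: (samples, 2)

t = np.arange(0, 0.5, 1 / FS)
voice = np.sin(2 * np.pi * 220 * t)          # a stand-in for the remote talker
stereo = spatialize(voice, azimuth_deg=45)   # render the source 45 degrees to the right
```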