Decoding the hubbub

Sounds arrive at our ears as a jumble of different pitches. So how do our brains decode these into rich soundscapes?
03 April 2018

Interview with

Dr Jennifer Bizley, University College London


Girl wearing headphones


Once a sound has reached our brain, how does it work out what to do with all the different pitches? Georgia Mills found out from Jenny Bizley, of the UCL Ear Institute...

Jenny - We have some clues about how the brain does this. We know, for example, that many natural sounds like people’s voices have a pitch associated with them, and sounds with a pitch tend to be harmonic. That means they have some fundamental frequency, the lowest frequency, which is what we think of as the pitch; for example, the A string on a cello is 220 Hertz. But there’ll also be energy at multiples of that, so at 440, 660, 880, and so on.

Georgia - It helps to imagine it as a ladder, where each step represents energy at a different frequency. We interpret it as one lovely tone but really it’s many layered over the top of each other. Our voices do this too.

Jenny - Our brains are aware of this pattern, so they can essentially associate sounds, or sound components, that have that harmonic structure, and group them together. We know that the brain does this because, if in the lab we create a sound that is harmonic, but we mess around with it so that we take one of those particular harmonics and shift it a little bit in frequency, then perceptually you’ll go from hearing a single note to hearing two distinct sound sources.
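To make that concrete, here is a minimal sketch in Python with NumPy. The function name, the 1/k amplitude roll-off, and the 50-cent mistuning of the fourth harmonic are illustrative choices, not figures from the interview; it builds the “ladder” on a 220 Hz fundamental, then shifts one rung, which listeners tend to hear as a second source popping out of the note:

```python
import numpy as np

SR = 44100          # sample rate (Hz)
DUR = 1.0           # duration (s)
F0 = 220.0          # fundamental: the cello A string from the interview
N_HARMONICS = 8

t = np.arange(int(SR * DUR)) / SR

def complex_tone(f0, n_harmonics, mistune=None):
    """Sum of harmonics at f0, 2*f0, 3*f0, ...
    If mistune=(k, cents), the k-th harmonic is shifted by that many cents."""
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if mistune and k == mistune[0]:
            f *= 2 ** (mistune[1] / 1200)     # shift by the given cents
        tone += np.sin(2 * np.pi * f * t) / k  # 1/k roll-off for higher harmonics
    return tone / np.max(np.abs(tone))

fused = complex_tone(F0, N_HARMONICS)                   # heard as one note
split = complex_tone(F0, N_HARMONICS, mistune=(4, 50))  # 4th harmonic +50 cents:
                                                        # tends to pop out as a
                                                        # second sound source
```

Writing `fused` and `split` to audio files and listening back is the quickest way to hear the grouping effect the interview describes.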

There are other cues that your brain can use: sound components that come from the same source tend to change together in time, so they’ll get louder together and quieter together. They’ll also tend to come from the same place in space. So there are all of these hints that the brain can use to try and build a model of how the sounds that arrived at the ear existed in the world beforehand.

Georgia - Do we use any other senses when we’re trying to untangle this mess?

Jenny - We, without really realising it, integrate information across our senses all of the time; this is particularly true for vision and for hearing. There are specific examples you can think of where integrating visual information with what you hear can be helpful to you so, for example, our ability to localise a sound in space is really good. We can tell apart sounds that are about one degree different; that’s sort of the width of your thumb at arm's length. But vision is 20 times better than that so it makes sense that you integrate information about where you see something coming from with where you hear it coming from.

Another example of when it’s really helpful to be able to see what you’re trying to listen to is when you’re listening to a voice in a noisy situation, say in a restaurant or a bar. If you can see a speaker’s mouth movements, that gives you additional information about what they’re saying: the mouth movement you make for a “fu-” sound is very different from the one you make for a “bu-” sound.

But at an even lower level, there’s the problem of how, when you’re faced with a really complicated sound mixture, you separate it out into different sound sources. If you’re looking at a sound source, for example someone’s mouth, you’re getting a rhythmical signal: the mouth gets wider as the voice gets louder, and smaller as the voice gets quieter. Even the way someone moves their hands as they’re speaking gives you another sort of rhythmical cue.

We’ve recently learned that that very basic information is enough to help you group together the sound elements that come from a sound source that’s changing at the same time as what you’re looking at, and that allows you to separate that sound source out from a mixture more effectively.

Georgia - You mentioned there separating out where a sound is coming from, so what do we know about how our brain works this out?

Jenny - In the auditory system, we have to rely on the fact that we have two ears, and the brain has to detect incredibly tiny differences in timing and sound level between the two ears. If you have a sound to your right, for example, it’s going to hit your right ear slightly sooner than your left ear, and it’s going to be louder in that ear. And your brain is incredibly sensitive to these tiny differences in timing and sound level; we’re talking about fractions of a millisecond here.
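For a sense of scale, here is a back-of-the-envelope sketch in Python using Woodworth’s spherical-head approximation for the interaural time difference; the 8.75 cm head radius is a textbook convention, not a figure from the interview:

```python
import math

HEAD_RADIUS = 0.0875    # metres; a typical adult head (assumption)
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def itd_woodworth(azimuth_deg):
    """Interaural time difference for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to the right), using Woodworth's
    spherical-head approximation: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"{az:3d} deg -> ITD ~ {itd_woodworth(az) * 1e6:4.0f} microseconds")
```

Even for a sound directly off to one side, the difference comes out at only about two thirds of a millisecond, which is why the brain’s sensitivity here is so remarkable.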

There’s a third cue that you can use, which is that as the sound is funnelled into your ear it passes over the pinna, the part of the ear that you see on the side of the head, which has these complicated folds on it. As the sound comes in, it interacts with those folds in a way that depends on where the sound source comes from; they effectively filter the sound and produce characteristic notches in the frequency spectrum, which give you information about where a sound source originated.
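One way to picture how those folds create notches is as comb filtering: the direct sound sums with a slightly delayed reflection off a fold, and they cancel at certain frequencies. Below is a minimal Python/NumPy illustration; the 1.2 cm path difference and 0.8 reflection gain are made-up values for demonstration, and real pinna filtering involves several reflections and resonances:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
PATH_DIFF = 0.012       # metres: extra travel for a wave reflected off a
                        # pinna fold (illustrative value, not from interview)

tau = PATH_DIFF / SPEED_OF_SOUND      # reflection delay, ~35 microseconds
freqs = np.linspace(0, 20000, 2001)   # audible band, Hz

# Direct sound plus one attenuated, delayed reflection acts as a comb filter.
# Its magnitude response dips wherever the reflection arrives out of phase:
response = np.abs(1 + 0.8 * np.exp(-2j * np.pi * freqs * tau))

notch = freqs[np.argmin(response)]
print(f"first spectral notch near {notch / 1000:.1f} kHz")  # ~14.3 kHz here

# A different source direction changes PATH_DIFF and so moves the notch;
# that shift is the cue the brain can read off the frequency spectrum.
```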
