Speaking of the Philadelphia science museum, they have a fun interactive experiment there:
Here's what they describe as "the science behind it":
In this activity, you hear a sentence that has been distorted by a computer. The sounds are muddled and squeaky, and it is tricky to understand what is being said. Then you hear the same sentence without distortion, for instance: “There’s coffee on that seat.” A woman’s voice speaks clearly, making it easy to understand.
Surprisingly, when you go back and listen to the distorted sentence again, you can understand the muddled words. Your brain is now using existing knowledge – from the clear sentence that you heard – to interpret that information. Once your brain knows what is being said, it applies the information and makes sense of the distorted sound.
Sometimes, your brain takes in sensory data layer by layer to piece together what’s happening. This is called “bottom-up processing.” More often, as is happening in this activity, your brain operates in a “top-down” manner, making predictions based on previous knowledge. This saves time, but because each person’s prior experiences are unique, top-down processing can cause people to perceive the same information in different ways.
Does this apply to how we experience music on our audio systems? Why wouldn't it?