Over at Mind Hacks, Vaughan finds a fascinating review of recent books on the history of the senses. He highlights a short section on medieval theories of perception, which hypothesized that the eyes actively sent out "rays" that illuminated what we saw. (This view of visual sensation is what made the "evil eye" possible - people were infecting you with their sight.)
In 1492, learned debates also influenced how the world was perceived. As the medical historian Nancy Siraisi and the neuroscientist James T. McIlwain point out, medieval scholars would have located sensory perception in the brain (Siraisi, 1990; McIlwain, 2006). However, they would have understood the five senses as active entities conveying information about the outside world to the internal senses of common sense, imagination, judgement, memory and fantasy (the ability to visualize).
Scholars differed considerably over how this worked in practice: for example, were rays emitted from the eyes towards the viewed object, or was it the other way round? Either theory allowed for these rays to influence both viewer and object, thus explaining the widespread concept of the evil eye, and the belief, still current in the 18th century, that what a mother saw affected her foetus. The brain, however, was not the only sensitive organ of the body.
The heart was believed to be the centre of the animal soul, and thus closely associated with more carnal senses such as touch. The brain, the centre of the rational soul, was more closely associated with sight; the eyes were often viewed as the 'windows of the soul'. Sight, therefore, was given pre-eminence in the pre-modern world, as it is today, but often for spiritual reasons, owing to the inter-dependence of religion and rational knowledge (scientia).
Of course, such antiquated theories now seem ridiculous. But will the scientists of the future be any kinder to our own metaphors for perception? Ever since the invention of the photograph, in the mid-19th century, we've seen our visual system as a nifty biological camera. Photons passively pierce the eyeball, trigger rods and cones in the retina, and then that map of lit pixels gets "developed" by the brain. In other words, what we see is what is there - our vision is an objective representation of the outside world.
If only it were so simple. Ever since the pioneering work of Hubel and Wiesel, we've known that the brain doesn't process pixels of light, like a camera. Instead, our neurons are strange things, fascinated by angles and edges, contrast not brightness. In their seminal 1959 paper "Receptive fields of single neurones in the cat's striate cortex," Hubel and Wiesel became the first scientists to describe reality as it appears to the early layers of the visual cortex: just a mess of chromatic blocks.
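If you want to see what "fascinated by angles and edges, contrast not brightness" means in practice, here is a minimal sketch of my own (it is not taken from the Hubel and Wiesel paper). It uses a zero-mean oriented Gabor filter, a standard stand-in for a V1-like receptive field: the filter responds strongly to an edge at its preferred orientation and not at all to a uniformly bright patch.

```python
# Toy illustration of orientation- and contrast-selective filtering.
# A Gabor filter is a common stand-in for a V1-like receptive field;
# nothing here is a model from Hubel & Wiesel's paper itself.
import numpy as np

def gabor_kernel(size=9, theta=0.0, wavelength=4.0, sigma=2.5):
    """Oriented Gabor filter: a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_theta / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()   # zero mean: ignores uniform brightness

def filter_response(image, kernel):
    """Slide the kernel over the image (valid-mode cross-correlation)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic scene: a dark patch next to a bright one, i.e. one vertical edge.
image = np.zeros((40, 40))
image[:, 20:] = 1.0

vertical = gabor_kernel(theta=0.0)          # prefers vertical edges
horizontal = gabor_kernel(theta=np.pi / 2)  # prefers horizontal edges

print("max response, vertical filter:  ", np.abs(filter_response(image, vertical)).max())
print("max response, horizontal filter:", np.abs(filter_response(image, horizontal)).max())
# The vertical filter fires strongly at the edge; the horizontal one stays
# near zero, and both give zero response over the flat, uniformly bright region.
```

The point of the zero-mean kernel is exactly the one in the paragraph above: a patch of pure brightness produces no response at all, while a contrast boundary at the preferred orientation produces a large one.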
What happens then? In order to make sense of this visual cacophony, the brain has to do what cameras don't: interpret the input. It has to parse all those lines and figure out which objects are where. As I've noted before, we now know that what we end up seeing is highly influenced by something called "top-down processing," a term that describes the way cortical brain layers project down and influence (corrupt, some might say) our actual sensation. After the inputs of the eye enter the brain, they are immediately sent along two separate pathways, one of which is fast and one of which is slow. The fast pathway quickly transmits a coarse and blurry picture to our prefrontal cortex. Meanwhile, the slow pathway takes a meandering route through the visual cortex, which begins meticulously analyzing and refining the lines of light. The slow image arrives in the prefrontal cortex about 50 milliseconds after the fast image.
Why does our mind see everything twice? Because our visual cortex needs help. After the prefrontal cortex receives its imprecise picture, the "top" of our brain quickly decides what the "bottom" has seen, and begins slyly doctoring the sensory data. (It's somewhat akin to tweaking a photo in Photoshop...) Form is imposed onto the formless rubble of V1. If these interpretations are removed, our reality becomes unrecognizable. (See, for instance, the wonderful case history by Oliver Sacks, "The Man Who Mistook His Wife for a Hat.")
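One common way to make the "top doctoring the bottom" idea concrete is to frame it as Bayesian cue combination, in which a prior expectation is merged with noisy sensory evidence. The sketch below is only that toy formalization, with made-up numbers; it is not a claim about the brain's actual algorithm, nor about the specific fast and slow pathways described above.

```python
# Toy "top-down" influence as precision-weighted averaging: a coarse
# expectation (say, the fast pathway's blurry guess) is combined with a
# noisy detailed measurement (the slow pathway's signal). Illustrative only.

def combine(prior_mean, prior_var, sensory_mean, sensory_var):
    """Merge an expectation and a sensory estimate, weighted by their precisions."""
    prior_precision = 1.0 / prior_var
    sensory_precision = 1.0 / sensory_var
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    posterior_mean = posterior_var * (prior_precision * prior_mean +
                                      sensory_precision * sensory_mean)
    return posterior_mean, posterior_var

# The blurry "gist" expects an edge near position 10 (with high uncertainty);
# the detailed sensory signal says 12 (with low uncertainty).
percept, uncertainty = combine(prior_mean=10.0, prior_var=4.0,
                               sensory_mean=12.0, sensory_var=1.0)
print(f"perceived position: {percept:.2f} (variance {uncertainty:.2f})")
# -> perceived position: 11.60 (variance 0.80)
# The expectation pulls the percept away from the raw sensory value:
# what we "see" is a compromise, not the input itself.
```

Even in this cartoon version, the moral is the same: the final percept is never the raw input, but the input filtered through what the brain already expects to find.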
The moral, of course, is that our brain is nothing like a camera. There is nothing passive about perception. (Of course, our eyes don't emit rays either, but at least that medieval metaphor captures the active nature of the sensory process.) I imagine the scientists of the 22nd century laughing at our meager technological analogies, just as we laugh at the analogies of the past.
In light of Alva Noë's recent book release, I like his 21st-century metaphor:
"Perception is not something that happens inside us but something we do"
I am interested in how photographic visual information is perceived in comparison to the perception of the "real" object/scene/person. For example, seeing a person as opposed to seeing a photograph of the person and how those are encoded differently or similarly. I find the phenomenon of the association of photographs, even digital photographs, with the "real" extremely intriguing, and would appreciate any direction towards research or articles relating to this topic.