In his review of State of Play, David Denby laments the rise of incoherence as a filmmaking technique:
“State of Play,” which was directed by Kevin Macdonald, is both overstuffed and inconclusive. As is the fashion now, the filmmakers develop the narrative in tiny fragments. Something is hinted at–a relationship, a motive, an event in the past–then the movie rushes ahead and produces another fragment filled with hints, and then another. The filmmakers send dozens of clues into the air at once, but they feel no obligation to resolve what they tell us. Recent movies like “Syriana,” “Quantum of Solace,” and “Duplicity” are scripted and edited as overly intricate puzzles, and I’ve heard many people complain that the struggle to understand the plot becomes the principal experience of watching such films.
I quite enjoyed a number of those movies (Quantum of Solace is the notable exception – it was confusion masquerading as complexity), but I do think Denby makes an excellent point. Ever since Pulp Fiction, and certainly since The Usual Suspects, there’s been a segment of filmmakers who see the movie as akin to a puzzle, an artistic form which should only make sense in the moments before the final credits start to roll. Instead of having our narrative understanding slowly build, these directors dole out comprehension in sudden spurts, whenever a crucial twist is revealed. The end result is that disbelief can’t be suspended because we’re too busy trying to figure out what the hell is going on.
Of course, ambiguity and uncertainty can be a nifty trick, especially as a counter to all the predictable crap turned out by Hollywood. (I’m looking at you, Matthew McConaughey.) But it’s worth pointing out that such formal devices – e.g., the splicing of time, so that the end happens first – seem to contradict the essential state of movie-watching, which is total immersion in a flickering image. This, after all, is why people go to the movie theater: for release, for 120 minutes of cognitive vacation.
Here’s the requisite scientific reference, which comes from a study led by Rafael Malach. The experiment was simple: he showed subjects a vintage Clint Eastwood movie (“The Good, the Bad and the Ugly”) and watched what happened to the cortex in a scanner. To make a long story short, he found that when adults were watching the film, their brains showed a peculiar pattern of activity, which was virtually universal. (The title of the study is “Intersubject Synchronization of Cortical Activity During Natural Vision”.) In particular, people showed a remarkable level of similarity when it came to the activation of areas including the visual cortex (no surprise there), the fusiform gyrus (it was turned on when the camera zoomed in on a face), areas related to the processing of touch (they were activated during scenes involving physical contact), and so on. Here’s the nut graf from the paper:
This strong intersubject correlation shows that, despite the completely free viewing of dynamical, complex scenes, individual brains “tick together” in synchronized spatiotemporal patterns when exposed to the same visual environment.
But it’s also worth pointing out which brain areas didn’t “tick together” in the movie theater. The most notable of these “non-synchronous” regions is the prefrontal cortex, an area associated with logic, deliberative analysis, and self-awareness. (It carries a hefty computational burden.) Subsequent work by Malach and colleagues has found that, when we’re engaged in intense “sensorimotor processing” – and nothing is more intense than staring at a massive screen with Dolby surround sound – we actually inhibit these prefrontal areas. The scientists argue that such “inactivation” allows us to lose ourselves in the movie:
Our results show a clear segregation between regions engaged during self-related introspective processes and cortical regions involved in sensorimotor processing. Furthermore, self-related regions were inhibited during sensorimotor processing. Thus, the common idiom “losing yourself in the act” receives here a clear neurophysiological underpinnings.
What does this have to do with tricky cinematic narratives? I’d argue that the constant confusion makes it harder for us to dissolve into the spectacle on screen. We’re so busy trying to understand the plot that our prefrontal cortex can’t turn off. To repeat: this isn’t necessarily a bad thing, but it does go against the fundamental experience of watching a movie. It’s a formal innovation that contradicts the essence of the form.* We can’t afford to “lose ourselves” in the movie because we’re already lost.
*In other words, it’s not so different from the post-modern novel, which is constantly calling attention to the fact that it’s only a text. Most novels, of course, try to make us forget that we’re just reading letters on a page. They want us immersed in the prose, not contemplating the unreliability of the narrator.