Why Rated-G isn't an excuse for poor craftsmanship

Having just re-watched Wallace and Gromit and the Curse of the Were-Rabbit, I was reminded that rated-G films and programs do not need to entail idiotic gags that cater to the stupidest of the stupid. Like Sesame Street, Classic Disney, and more recently Pixar, Nick Park and Aardman Animations have proven again and again that just because you’re aiming for a G rating doesn’t mean your jokes have to be any less intelligent, the scares any less horrific, or the innuendos any less implied – all in good taste, of course.

Why is Wallace and Gromit such a gem of a canon? Quite simply, it makes no pretense of being anything more or less than it is – a lovely, adventure-filled universe with the ever-inventing Wallace and Gromit, the master of silent acting. As I’ve said before,¹ Nick Park’s Wallace and Gromit universe is undeniably sweet and unimposing in its presentation, entirely grounded in the reality of the mundane amidst the spectacular. We see Wallace’s outrageous inventions, and he acts as if they’re perfectly normal even when they malfunction (the most he’ll do is shake his hands back and forth in front of his face and go “oh dear…!” if it gets really bad); and as per usual, good old Gromit is there to pick up the pieces after something goes haywire, or if Wallace simply wants some cheese and crackers. This is a universe untainted by true malice, the most negative of feelings arising solely from somebody being after something perfectly ordinary, and in the most hilarious manner at that (Victor and his toupee? Check!)

Throughout the entire movie I was laughing hard, amazed (and still am!) by the cleverness and subtle jokes Park and his production team snuck in – jokes that both children and adults will laugh at, and for very different reasons. This is what a good movie is all about!

So why is it that so many children’s films these days seem so content to sit and stagnate in a pool of screenplay mediocrity? It’s almost as if a majority of animation studios took a cue from Disney Channel, fixating on a false sense of the extraordinary and seeming forever on the verge of being prescribed Ritalin. Everyone wants to be Hannah Montana, shiny and bright and in-your-face fun! There needs to be drama, action, something to keep everyone on screen on a constant energy high, as if the world will stop spinning if they’re not displaying the effects of a caffeine overdose. No one seems to be content with the quieter aspects of childhood, the moments where we find a little caterpillar walking slowly by and begin imagining just what it might be up to.

What makes Wallace and Gromit so exemplary is that, in addition to outright rejecting the A.D.D. mentality endemic to most recent American children’s productions, the series is inherently good-natured and calm. No one gets too excited, too sad, too angry, too – well, anything. The characters take it all in stride – from being dragged underground by a giant rabbit to two dogs having an aerial dogfight, or even the fact that everyone has over-the-top security for their precious vegetables – nothing quite seems to push anyone into the realm of clichéd or hopeless desperation: we all know it’s going to be all right in the end, just you watch! (also, could you make some tea and crackers while you’re at it?)

The narrative structure of Wallace and Gromit is one of adventure, balancing Looney Tunes cartoon spectacle with characters who at most will say “oh dear” when something goes haywire. There are feats only imaginable in the realm of animation (stop-motion animation, for that matter) that defy logic so unapologetically, without so much as a wink – well, you’ll just have to accept that Wallace insists on contraptions and machines to perform the most menial of tasks, and that two dogs will take a moment to shuffle through their coin pouches while fighting over a plane ride, because at the end of the day you know they’ll all be sitting down for cheese and crackers.

The only other director-animators I know who are completely at peace with being unspectacular are Hayao Miyazaki and Isao Takahata of Studio Ghibli, and Sylvain Chomet, who made The Triplets of Belleville and the upcoming The Illusionist. There’s a distinct calmness to Miyazaki’s, Takahata’s, and Chomet’s works that refuses to indulge in instant anything, and refuses even more to cue us into how we should be feeling. The musical scores are not composed to elicit a specific emotion, but beautifully supplement what happens on screen at a given time; there are times when minimalism and silence are the most important sounds for a scene. Nick Park takes this distinct calmness a step further by framing something otherwise amazing and fantastic as something almost regular, expected, and humorously mundane even. Park’s directing technique is not dissimilar to that of Joel and Ethan Coen, especially in the duo’s exemplary Fargo, in which an intense kidnapping and a series of gruesome murders tied to an extensive extortion scheme are somehow hilarious and really, really uncool (the word “yeahhh” will never be the same for me again).

Rated G doesn’t need to be dumbed down. Classic Disney films like Pinocchio and Snow White are as rated-G as they come, and there are still scenes I find traumatizing and disturbing (to date, the donkey transformation scene in Pinocchio still gives me chills)². Pixar has inexplicably established itself as an institution open to challenge and change³, where the creative process is inherently tied to feedback, feedback, and more feedback. Even Sesame Street has demonstrated exemplary intelligence in broadcasting primarily to children: the PBS program has repeatedly shown itself to be current, intelligent, and sensible in communicating sociocultural, political, and even pop-culture topics to children (the Old Spice spoof⁴ is one of my favorite parodies to date).

I can only hope the sugar-high mentality of so many recent children’s films will have diluted at least a bit by the time I end up raising a kid, and hope even more that this stupid 3D enthusiasm will have kicked the bucket. We need smarter, better writers for children, writers who are sensitive enough to know what strikes a chord in both kids and adults alike, and timelessly so. In television, we’ve had Sesame Street, Animaniacs, Foster’s Home for Imaginary Friends (Craig McCracken), and old Spongebob Squarepants (when Stephen Hillenburg was still involved); in film, we’ve got classic Disney, Pixar, Sylvain Chomet, Hayao Miyazaki, Isao Takahata, Henry Selick and Nick Park; in books, we’ve got Lewis Carroll, Beatrix Potter, J.M. Barrie, Dr. Seuss, Roald Dahl, Beverly Cleary, Avi, and now J.K. Rowling – so, anyone else up for the challenge?

Referenced and Recommended Reading/Links

¹The Simple Sweetness and Sincerity of Wallace and Gromit

²The Mythology of Classic Disney

³“It Gets Better,” Love Pixar

⁴Smell Like a Monster

• Wallace and Gromit and the Curse of the Were-Rabbit – review by Roger Ebert

• Cookie Monster audition tape for SNL

• Ricky Gervais and Elmo – off camera bantering

• The Kids Are All Right: Ramona and Beezus – by Dennis Cozzalio

Be Human

To be real is to be mortal; to be human is to love, to dream and to perish.

– A. O. Scott

There’s been a lot of buzz on the internet lately, post-The Social Network, about whether or not the internet is a bane to our humanity. Enthusiasts say it allows connection beyond physical limits, and that it is democracy in the dark; detractors say it allows us to release innate bestial behavior that we’d otherwise control in the physical world, and that it’s an endless sea of voices dumbing one another down.

After some time mulling it over, I realized almost no argument defined one very important term – what is humanity, and what is it to be human?

This thought popped up after reading Richard Brody’s counterargument¹ to Zadie Smith’s Generation Why?² piece. Having read Smith before Brody, I could see why he takes so much issue with her seeming diatribe. She writes:

How long is a generation these days? I must be in Mark Zuckerberg’s generation—there are only nine years between us—but somehow it doesn’t feel that way… I often worry that my idea of personhood is nostalgic, irrational, inaccurate. Perhaps Generation Facebook have built their virtual mansions in good faith, in order to house the People 2.0 they genuinely are, and if I feel uncomfortable within them it is because I am stuck at Person 1.0.

Brody then lambasts Smith on this angle, saying: 

The distinction between “People 1.0” and “People 2.0” is a menacingly ironic metaphor: following Smith’s suggestion that those in the Facebook generation have lost something essential to humanity, it turns “People 2.0” into, in effect, “People 0.0”—inhuman, subhuman, nonhuman. That judgment is not merely condescending and hostile, it’s indecent.

On this level I couldn’t agree more with Brody, not only because the 1.0 vs. 2.0 distinction is arrogant and subjective – where’s the cutoff to begin with? – but because Smith falls on her own sword by failing to define what she considers human in the first place. To be fair, neither does Brody, but his response is a critical one, and rightfully so.

Perhaps Smith didn’t intend for the 1.0/2.0 distinction to be interpreted this way; maybe she simply meant our brains are wired differently. According to Nicholas Carr’s The Internet Makes Deep Thought Difficult, if Not Impossible³, 

The Net delivers precisely the sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that result in strong and rapid alterations in the brain cell

Regardless, the very notion that somehow the internet age generation is “less human” because of how we operate or even how our brains are wired is a condescending one, conservative even. It implies some sort of overarching normality and human ideal that, frankly, is not universally applicable to how people operate on a day-to-day basis. To claim that those engaging on the net are “less human” implies that we cannot exist as distinct entities if we are not tied to our physical forms, period. 

I’ve written before⁴ about how much I disagree with this assessment, primarily because the body-existence relationship theory implies you somehow become “less human” if certain bodily functions cease to exist (e.g. being unable to eat, drink, or talk, or the loss of appendages). An existence is influenced and defined to an extent by its environment, but it has been proven over and over again that this existence can extend beyond the shell of the body it occupies. The very act of writing anything down is already an extension of someone’s existence: books were one of the earliest gateways to an alternative reality, and television, film, and other media followed suit. The internet is no different.

To be fair on Smith’s end, she makes a good point about how the internet has become less personal in some respects: 

When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendship. Language. Sensibility.

Brody rightfully criticizes her for this very phrase, but let’s put her rather negative parsing into context: remember the earlier days, when everyone was building their own websites, journaling online their own way via whatever tools they could find? Before Wikipedia, YouTube, hell, even Google – we were all trying to build our own realm, experimenting with different media and the limits of the internet at the time. I remember my friend Dominique once showed me some clips of Pokemon she’d managed to upload onto her dot-com site, and how much that impressed me; over ten years later and we’re posting minutes of clips on YouTube, no problem. I used Geocities (RIP) to create an online portfolio of writing, drawings, and even a journal; now there are flickr, tumblr, and more platforms than I could’ve ever imagined. There are now distinct hubs of websites, many long established during the dot-com bubble – Amazon for shopping, Photobucket for photos, and Google for pretty much everything these days.

There is of course some weight to what Smith says here: there has been plenty of psychological research into how much empathy people can show for one another depending on the degree of separation. For instance, in light of the Holocaust, the elephant-in-the-room question was this – how could ordinary people enlisted in the Nazi regime perform such atrocities? It turns out that multiple things were at play: primarily, the threat of death and harm to their families kept many members in check, regardless of their personal thoughts and philosophies on subjecting Jews to such depravity and genocide; it also turned out that when people cannot see the person they are about to harm or even kill (i.e. “the person in the other room is a prime subject, so you need to keep pushing the electrify button every five minutes or every time they deny they are a liar”), they are more likely to follow orders, because of the physical disconnect from the person they are engaging with. This explains why we get so many trolls and inflammatory remarks on the internet, a sorry symptom of a medium that allows us to be physically disconnected from actually seeing the person we may be engaging with. If this is the angle Smith meant when she said “we are reduced,” then yes – there is the potential for us to be less civil and collected given the chance to simply detach ourselves from physical sociocultural restraints. However, Smith goes on to say that the very existence of the net makes us less human in terms of individual character – all of which I fundamentally disagree with because, like Brody, I find it arrogant and hostile. It’s one thing to say a degree of separation enables us to be more belligerent and in poor taste (even bestial, given the recent surge of cyberbullying), but it is another to claim that no good can come of the medium either, and that to subscribe to the net is to subscribe to an existence of hollow emptiness.

It’s true that institutionalizing websites has made the internet less personalized, and that to subscribe to such categorical organization is to give up certain programming and web-design idiosyncrasies otherwise expressed outside the set parameters of a search engine, mail client, or social networking tool. Smith’s wording, however, is much too heavy for an assertion without a real foundation: we’re still ourselves on the net, and while we may share similarities with one another, these similarities do not detract from who we are as individuals. Standardization, albeit gray-scaling things down a bit, allows us to connect more easily to others, and then it’s possible to use such a connection as a segue to another piece of the internet that is perhaps more personal to ourselves, and ourselves alone.

A perfect analogy for how the internet has transitioned is the radio. When it began, everyone was out for themselves: you played with radio waves, made your own messages, relayed back and forth between different frequencies – it was a free-for-all. However, as talk programs were established and distinct stations fell into place, this free-for-all diminished and everyone began subscribing to the programs of their liking.

If we were to go by Smith’s standards, radio equally made everyone “less human” than their predecessors once broadcast stations took over from people playing around with frequencies and wavelengths. With this additional perspective, I couldn’t agree more with Brody’s response:

Smith’s piece is just another riff on the newly resurgent “what’s wrong with kids” meme that surges up from the recently young who, in the face of rapid change, may suddenly feel old… Democratization of culture implies democratization of influence. Biological age, years of study, and formal markers of achievement aren’t badges of authority any more; they haven’t been for a long time. In school, in universities, they do remain so, and Facebook arose in response to, even in rebellion against, the social structures of university life that were invented for students by their elders the administrators on the basis of generations’ worth of sedimented tradition. Who can blame Zuckerberg for trying to reorganize university life on his own terms—and who can blame those many young people, at Harvard and elsewhere, who found that his perspective matched, to some significant extent, their own world view and desires? And if Facebook also caught on even among us elders, maybe it’s because—in the wake of the exertions of the late-sixties generation—in some way we are now, today, all somehow still in a healthy state of struggle with institutions and traditions.

No matter how you look at it, the internet truly is democracy in the dark. Democracy does not necessarily mean people do or believe in the positive – Apartheid is a perfect example of democracy in the negative – but it is simply an avenue for voices to be heard, and for the truly strong, competent, and remarkable individuals to shine even brighter amongst the screams and laughter of the net. To claim that the internet somehow makes us dumber⁵ ignores the very external institutions that have collapsed so badly (the American education system is riddled with so many problems that I’m still astounded the federal government insists upon school curricula determined by the states), and ignores the fact that the internet, no matter how you look at it, is an extreme representation of our very selves, which have inherently been shaped by the very institutions and policies formed in the non-virtual world. To claim also that media somehow make us “less human” (a claim made famously by Chuck Klosterman in this interview⁶) is, as Brody says, incredibly inappropriate and condescending, and again ignores a grave, fundamental question – what does it mean to be human in the first place?

Lastly, to add my two cents: not once does she directly define what it is to be “human,” claiming only that People 1.0, “real people,” are simply not People 2.0, “the Zuckerberg generation.” This is where and why Smith’s argument (and rant, for that matter) falls apart from the very beginning, all because she chooses to define her terms in opposition rather than in definite terms. Defining in opposition is all relative: if you say “I’m against those who are sexually depraved,” you’re not defining what sexual depravity is, merely implying some subjective grouping of those you consider “sexually depraved”; conversely, if you directly say “I’m against the philosophy of free love because I believe in monogamy,” the terms of “sexual depravity” are more clearly defined, and while these parameters are subjective there is at least a constant the argument can fall back on.

So to you, Professor Smith: to be human is to have the irrational desire to love and be loved, regardless of what these emotional, illogical actions may entail for our physical well-being. There is a universal sense of our mortality and of the gateway to death that awaits us at the undefined chronological end. So no matter how much the world changes, what new innovations happen to come our way, or even how our behaviors may fluctuate with pulsating sociocultural upheavals and revisions, humanity will always be present so long as we have a sense of illogic in dealing with the world around us. I am not a 2.0 person, just as you are not a 1.0 person: we are both in the same game, only I do not crave the nostalgia and older institutions you are more familiar and comfortable with. Let us all agree that change is one of the few constants in life, alongside life and death.

Irrational love defeats life’s programming. 

– Andrew Stanton


Referenced Reading: 

¹Plus on Change – by Richard Brody

²Generation Why? – by Zadie Smith

³The Internet Makes Deep Thought Difficult, if Not Impossible – by Nicholas Carr, published in the magazine “Discover Presents: The Brain,” published Fall 2010

⁴Ghosting

⁵Dumb and Getting Dumber – by Janice Kennedy

⁶The Chuck Klosterman Interview Part 2 – conducted by Hunter Stephenson of /Film

What's in an adaptation?

Having recently watched Harry Potter and the Deathly Hallows: Part 1 twice, I sat down to think about some key aspects of adapting a book into a movie. The visuals were spectacular as expected – director David Yates has become well established in the Potter franchise for dazzling us with colorful action and beautiful landscapes – but in the end, are the cuts to J.K. Rowling’s narrative worth the lack of cohesiveness for those unfamiliar with the world of Hogwarts?

I believe a literal, page-by-page adaptation of a book (or any non-film medium, for that matter) is an atrocity to film. The nature of a narrative is specific to its medium, and to transcribe it into another medium requires an understanding of both the source and the adaptive medium. For books, it is all about the individual reader’s imagination, and how the words on each page convey an image just descriptive enough to visualize, yet open enough that we as readers can impress our own ideas about what is handsome or ugly, good or bad. For film, it’s all about composition, sound, and story: within each frame characters are placed for specific presentational purposes, music is composed for different effects, and a core story ties it all together into one cohesive movie. With film, our imaginations may not be an active player, but our emotions are in full bloom with each sensation of sound and color. An adaptation must consider these medium differences if it wishes to be successful, and the filmmaker must understand that it is their job to offer something new, something intriguing about the same story that cannot be had from reading alone – all while maintaining a level of cohesion.

That said, we can juxtapose director Chris Columbus (Harry Potter and the Sorcerer’s Stone and Harry Potter and the Chamber of Secrets) with David Yates (all the Harry Potter films since Harry Potter and the Order of the Phoenix), since both directors fall short of the criteria I’ve written out. In this case, Columbus and Yates demonstrate the shortcomings of lacking vision and lacking cohesiveness, respectively.

When I first saw Harry Potter and the Sorcerer’s Stone nine years ago (yikes! I’m getting old), I was severely disappointed not by the special effects (though at the time some of them were at best mediocre) nor the actors they chose, but simply because the film itself felt lifeless compared to the experience of reading the original novel. This was entirely due to Columbus’ insistence on writing and filming every detail from the book. Every. Bloody. Thing. 

This wouldn’t be such a bad thing if not for the expositions that were boring in print and that, on screen, catered to audience members absolutely incapable of inference or simple connect-the-dots. Yes, every literal aspect of the book was included, but what was sorely missing was the creativity in bringing these details to life on the big screen. When reading the books, Potter fans have the luxury of imagining what Hogwarts could not only look like, but feel and smell like – the sort of personal, mental improv literary lovers engage in. With Columbus, who wanted to remain “pure” to the books, there simply isn’t any of this whimsy or emotion incorporated into the two movies he directed (the second one less so than the first, but still falling sorely short of its true potential). For Potter fans, his films are a bore; for other moviegoers, they might be cute, but nothing spectacular.

On the polar opposite end is David Yates, who has taken enormous liberty with Rowling’s narrative, to the point of making the last three films (and possibly four, with Deathly Hallows Part 2) almost exclusive to Potter fans and effectively incomprehensible to others (and even to veteran book readers, myself included). No doubt his films look fantastic: Yates has never failed to deliver fantastic visuals and exciting feats conjured up in the realm of magic, good and evil. However, I suspect Yates decided from the beginning that he wanted his interpretation of Harry Potter to be exciting!, spectacular!, and phantasmic! – which, quite simply, can only explain why he consistently chooses to omit key details and streamline rather extensive plot threads into a goal-oriented fantasy run.

I have the fortune of being hazily familiar with the Harry Potter series, just enough that I can remember certain “big” events happening (horcruxes, anyone?). Still, Yates’ retelling of Rowling’s tale has confused me on numerous occasions, boiling down to things simply happening because they did and they can and that’s how the story goes. I appreciate his artistic additions to the series (in The Order of the Phoenix, the battle between Dumbledore and Voldemort entails a beautiful sequence of red, green, and blue, all incorporated into various elements of nature and industrial constructions), but there’s also a dire need for narrative cohesiveness if you want non-Potter moviegoers to piece A and B together, and so on. Frankly, I can’t for the life of me remember anything narratively significant except for the big points (I won’t spoil them here) and that there are big shiny fights every once in a while. In The Deathly Hallows Part 1, I simply gave up trying to recall how things happened in the first place (“why is Harry holding that piece of mirror? And why did Dobby conveniently pop up at the best time?”) and had to consult friends and Wikipedia to clarify some key terms. Call me a non-Potter fan (I prefer to label myself as ambivalent), but if a movie based on a series I’m familiar with becomes effectively incomprehensible (“why don’t they just apparate away from those snatchers?”), I think there’s definitely a basic problem with the book-to-script adaptation.

The best director of the Potter series is Alfonso Cuaron, hands down. With The Prisoner of Azkaban, Cuaron not only adapted the script appropriately to the story’s increasingly dark fold (Rowling’s third book is my favorite in the series because of this), but drastically departed from Columbus’ antiseptic vision, incorporating a grayer color palette that emphasized moments of brightness (and blood), as well as giving Harry, Ron, and Hermione the dignity of not having to be in their robes 24/7. The movie is certainly far from perfect, but Cuaron’s departure from his docile directing predecessor was a breath of fresh air, the Harry Potter movie fans and moviegoers had been waiting for. The narrative is comprehensible, and the artistry and creative liberty are apparent and in keeping with Cuaron’s style (people have joked that Cuaron so drastically changed the landscape of Hogwarts that somehow, in the acres of Hogwarts, a massive earthquake took place to elevate the school ten feet without anybody noticing). Cuaron set an example for his directing successors, the most obvious being Yates’ adherence to a gray palette to hyper-emphasize splashes of color during his various action scenes; unfortunately, it seems that Yates may have taken Cuaron’s aversion to literalism too far, invariably making the Potter series increasingly streamlined at the cost of comprehension.

It’s inevitable that people will disagree with me on various points, but I fundamentally believe an adaptation must balance the original narrative’s cohesion with offering something new artistically. Where that balance lies is difficult to say until the final product comes to fruition, and it rests solely on what the writer and director have to say about it.

Recommended Reading/Links

Jason Reitman in Conversation – director Jason Reitman (Thank You For Smoking, Juno, Up in the Air) talks about film and his take on adapting books into film, and why The Catcher in the Rye is unfilmable.

Albus Dumbledore versus Voldemort – the clip of interest I mentioned above. 

Harry Potter and the Deathly Hallows Part 1 review – by Todd McCarthy

Harry Potter and the Deathly Hallows Part 1 review – by A.O. Scott

Synecdoche, New York – Part I of Analysis

Synecdoche (pronounced /sɪˈnɛkdəkiː/; from Greek synekdoche (συνεκδοχή), meaning “simultaneous understanding”) is a figure of speech in which a term is used in one of the following ways:

  • Part of something is used to refer to the whole thing (Pars pro toto), or
  • A thing (a “whole”) is used to refer to part of it (Totum pro parte), or
  • A specific class of thing is used to refer to a larger, more general class, or
  • A general class of thing is used to refer to a smaller, more specific class, or
  • A material is used to refer to an object composed of that material, or
  • A container is used to refer to its contents.
– From Wikipedia

I had the fortune of watching Charlie Kaufman’s Synecdoche, New York over the weekend, a viewing long overdue since its theatrical debut in 2008. Having seen how polarized and divided critics were on Kaufman’s vision – from enthusiastic praise to scathing scorn – I was curious to see why exactly one of my favorite writers could possibly enthrall and enrage critics all around. So after finishing Synecdoche, New York, I definitely saw why Roger Ebert considered it one of the films to be studied in film classes for years to come, simply because it’s that kind of movie.

For those unfamiliar with the film: Caden Cotard (Philip Seymour Hoffman), a skilled theatre director, realizes he is slowly dying from a mysterious autoimmune disease, and hits rock bottom when his wife Adele (Catherine Keener) takes their daughter Olive and leaves to start a new life in Berlin, away from the sullen and seemingly oppressive atmosphere of their home in Schenectady, New York. Unexpectedly, Caden receives a MacArthur Fellowship, allotting him money to explore and endeavor upon his own artistic ideas. With this, he gathers an ensemble cast into a warehouse in the Manhattan theatre district, directing them to create the greatest, most revolutionary play of all – a look into the cold, unspectacular aspects of real life.

After the credits rolled, I sat at my desk for a few moments to take in what I’d just experienced: a maddening tale of one man’s delirium and coping mechanism for death; a look into the obsession of the creative process; the odd, ungainly, and inexplicable visual details that intentionally stick out like a sore thumb over the course of the story; sudden leaps in chronology that could make Kurt Vonnegut pause for a few moments; or perhaps even a sad portrait of the sad life of a genius, and much more. Synecdoche, New York is that kind of movie – the one that takes more than one viewing to see all of its nuances, perhaps faulty editing and all.

Having given the film some adequate (but certainly not enough) musing, I thought of this: in Caden’s obsession with replicating every aspect of his life in the ultimate replica play, he effectively becomes the theatrical master of hindsight – a feat not too dissimilar to documentaries, photojournalism, or even reality television shows.

Hindsight is one of the most dastardly things we could ever hope to indulge in. “I should’ve, would’ve, could’ve, why didn’t I, why did I…” – the infinite possibilities could drive you mad with regret if you don’t learn something from past mistakes to change your course of action for the future. In Caden’s case, his entire life is one of regrets: before Adele leaves him, she comments that he is a disappointment, invariably setting off a chain of events which drive Caden to constantly look back in hindsight, to continuously reevaluate his past actions in order to feel worthy in Adele’s shadow – a feat he never personally accomplishes until the very end. By constructing the ultimate reality play – from buildings to people playing people playing people – Caden attempts to explore the mundane aspects of his life that have already happened, almost a therapeutic retrospective project so that he can understand why everything in his life seems to be falling apart, slowly and surely.

Caden’s efforts are not so different from the nature of a reality show, albeit on a grander, monumental scale. Like Caden’s magnum opus, reality TV shows are always after the fact, a look into events that happened only months before. Edited for the sake of marketability, these shows are deeply personal to the players involved, only to be broadcast thereafter to a greater, wider audience. The only buffer between the viewer and the person on screen is the television screen itself, and the passage of time between the initial filming and the eventual broadcast.

For Caden, however, there is almost no barrier between reality and hindsight, a product of his personal obsession with making his play absolutely perfect and unequivocally unspectacular. This minimal (if not nonexistent) barrier eventually drives the actors to depression, perhaps madness, and death – a symptom of reality and hindsight being broadcast too close to one another.

The question now is whether or not Caden successfully breaks closer to reality than any artist before him, or whether he simply drops into the abysmal obsession of recreating and replica-crafting – that is, whether or not Caden taps into the reality of human nature with his magnum opus.

Perhaps the first question we must consider is what the nature of being human is. For instance, is it so far-fetched to consider that perhaps, on some level, documentation dilutes events already past? And at what degree of documentation and publishing/broadcasting/performance does the portrayal become less adherent to the reality that once was? More importantly, through whose lens are we considering the events taking place, and to what extent is this lens subjective?

What we can say about Caden and his synecdoche of New York City is that deep down, he is a man who simply wants to be loved. He has made choices in life that resulted in Adele’s ultimate rejection, and his visionary play becomes almost like his last hope of ever feeling self-worth in Adele’s eyes. The remainder of his life is a constant game of catch-up, a mistake-correcting cycle that revolves solely around his desire to create something undeniably perfect from all perspectives, and around the inevitability that death and time will effectively neuter his last living years of artistic obsession, and that he will never, ever find closure with Adele.

Analysis to be continued…

Recommended Reading

The best films of the decade - Roger Ebert

O, Synecdoche, my Synecdoche! – Roger Ebert

The Chuck Klosterman Interview Part 2: 30 Rock, Mad Men, The Office, Arrested Development, and Why Movies and TV have made us less human – Hunter Stephenson of /Film

"Monsters" – A Cinematic Defiance of Genre Conventions

Gareth Edwards’ Monsters defies genre conventions much in the same vein as The Host did back in 2006. It outrages fans of rampant, ravenous beasts and excessive gore by going back to the classical aspects of fear, where the worst dread is not the cause itself but the anticipation of the cause making itself present.

Looking at the Rotten Tomatoes consensus, I see that it describes the film as “[not] quite living up to its intriguing premise, but [Monsters] is a surprising blend of alien-invasion tropes, political themes, and relationship drama.” This description does the film little service, if any: not once did the film or its advertisements claim specifically what Edwards’ cinematic vision would offer, nor does it explicitly explore political themes or relationship dramas. A more accurate description would be this – that Monsters offers a unique perspective on the disaster-monster movie by focusing not on the initial event itself, but on the events thereafter, and on how we humans have simply learned to adapt to the aftermath. Offering an incredibly plausible concept from a biological and evolutionary perspective, Edwards does what almost no modern horror, disaster, or monster film director comes close to doing – building an intimate relationship between the characters on screen and the audience, while sustaining us on technical-visual excellence at the same time.

The Appeal of the Monster Genre


Monsters unite humans. For whatever reason they go about terrorizing civilization, their very existence gives us cause to set aside our differences and instinctually fear for the survival of the human species. In a strange sense, their (un)natural existence presents to us an entity beyond our immediate understanding, a sort of shock-horror appall paired with awe at something so powerful, so unimaginable, that for a few split seconds the gut instinct is a mix of terror and amazement.

Monsters rarely have any distinguishable personality or motivation other than to terrorize the bejeezus out of us wee people. Godzilla destroyed Tokyo, and King Kong thrashed about New York City; yet in all of these famous monster conventions it somehow never occurred to these beasts that they could plausibly terrorize some other species, like gorillas, lions, or those bastard dolphins who seem to think so highly of themselves. No, somehow we humans offer or threaten something more to extraterrestrials or mega-sized organisms, whether it be our brains or simply our capacity to be stupid. Either way, monster convention states that the very existence of humans lends itself to jealousy, and that in a moment of absurdity some giant thing will usurp all peace and harmony with the end goal of wiping out humanity.

There’s an inherent human-centric side to monster films, in this respect. By making ourselves the sole victim (and perhaps victor) of an (un)natural battle frontier, it goes without saying how much people commonly believe we are somehow “above” nature, and that this de facto status instantly makes us the target of enraged megabullies who rise out of their slumber to throw rocks at skyscrapers. Somehow, while effectively naked and without many of the natural defenses other animals have – elongated fangs, rock-like skin, or weight to throw around – we humans still manage to upset somebody, and somebody big.

Yet perhaps another angle on the monster genre is one of environmentalism: by procreating and developing so extensively across the earth, humans have effectively ravaged the natural environment for its fertility; distraught and angered, these monsters retaliate violently to shut us down, to slap us silly into existential humility. Still, this doesn’t account for why aliens would simply fly on down through the ozone to zap us away, and it again reinforces the primary idea that the monster convention is inherently human-centric.

Why Monsters defies the monster convention


From a production POV, Gareth Edwards does wonders with only a five-person crew, including himself and the two actors. His feats include being the director, writer, and cinematographer/director of photography for the film, improvising (and encouraging his actors to do likewise) at each location, and using Autodesk 3ds Max to create the spectacular special effects – all for only $15,000 (comparatively, Michael Bay spent $200,000,000 on Transformers 2: Revenge of the Fallen). Edwards’ production epitomizes independent filmmaking at its best, demonstrating that good science fiction does not need to splurge millions of dollars to be successful or even tangibly good – a trend that invariably began with George Lucas’s blockbuster success with the original Star Wars back in 1977.

More impressively, Monsters immediately defies convention for multiple reasons: 

  1. Chronologically, it does not take place at the initial event of interest when a monster/alien first arrives, 
  2. The ‘creatures’ are biologically tangible, 
  3. They are not out to terrorize humans, 
  4. The two protagonists are not extraordinary, 
  5. People don’t change, and do. 

Chronologically Hereafter


By not focusing on the initial moment of conflict – where creatures and humans first collide – Edwards zeroes in on a much subtler, quieter aspect of human nature: our ability to grow numb to the aftereffects of devastation. At one point in the film, Sam (Whitney Able) turns on the television and sees a newscast about another tragedy/political event involving the creatures, and her only reaction is to yawn soundly and plop back onto her bed. It’s a subtle detail, and an effective one: ask yourself, how many times have you turned on the telly to see something about the Middle Eastern conflict, or the BP oil spill, or the Haiti earthquake, or even the clean-up efforts after Hurricane Katrina, and simply found yourself a bit apathetic to what has already happened?

This is an important aspect of human nature that few filmmakers of the monster convention explore, simply because it is less spectacular and less glamorous to focus on. Millions of people die every year, yet somehow a train-wreck phenomenon – in which a great number of people perish in a relatively short time span – unites the world in both horror and sympathy, an immediate common symbol of our capacity to care and to act accordingly when disaster strikes. The reality, of course, is that after a few months have passed, most people have moved on to the next big news, the next big thing in the public consciousness.

In Monsters, this chronological aspect gives the movie’s premise something much more substantial and humanistic. The presence of the creatures is universally accepted, and while they still pose a risk to those in certain areas, everyone still goes about their daily lives – culturally, socially, politically, and interpersonally.

Biologically Tangible


A fantastic choice on Edwards’ part was to make the creatures simultaneously extraterrestrial and biologically tangible. The movie states that a NASA probe sent into space to collect samples came back to earth and crashed into the ocean around Central America, resulting in alien lifeforms infecting and mutating cephalopods into what the world now knows as 'creatures.’ They lay their eggs on trees (thus creating “infected zones”) and travel hundreds of miles to procreate; it’s implied that they are drawn to electricity for reproductive reasons, perhaps for sexual display (not dissimilar to a peacock’s vibrant feather arrangement) or metabolic stimulation, or both.

From a biological point of view, this is absolutely ingenious. Fleshing out the physical presence of the creatures not only grounds them in a sense of reality, but lends them a bit of plausibility within the world of Monsters. Foremost, marine creatures are perhaps the most extraterrestrial-like creatures biologists can account for; it’s more than possible that there are deep-sea creatures we have yet to encounter, let alone account for with current technology. With this in mind, it’s not incomprehensible why so many aliens seem reminiscent of creatures fathoms below – tentacles, cold flesh, bulging eyes, non-mammalian – how could we not subconsciously project our ideas about these beings onto ideas of unnatural, extraterrestrial terrors coming down to earth? (An explicit example of this sea-creature projection is District 9, where the alien lifeforms are derivatively called 'prawns.’)

Their presence is a biological phenomenon, perhaps extraterrestrial but biologically sound nonetheless. Realistically, they won’t always be seen at any given time, which Edwards wisely chooses to depict as both an aesthetic and a budgetary choice; this heightens the tension and intensity of each scene, since for the most part we rarely see or hear the creatures everyone talks about so adamantly. This scarcity creates more impact when we actually do see or hear the creatures, perhaps even a sense of mysticism and awe at the same time.

There’s one scene where Sam and Andrew are at a rest stop on their journey, and deep in the jungle they hear a creature roar. A nearby soldier raises his gun, and we can hear some rustling in the forest background; for a few moments we are enraptured by the scene’s tension, unsure as to whether or not the creature will make itself present and mark itself as an immediate threat. Edwards takes the time to let the moment pass, letting both the characters and the audience hold their breath until the threat moves on – exactly how nature functions in real life.

The Unwitting Terrorizers


“What is life than to keep meat fresh?” – Doctor Who

Closely tied to the biological tangibility of the creatures in Monsters is their implied drive to reproduce, a drive that is ubiquitous to perhaps every living organism inhabiting planet earth. Their motivation is primitive and plausible, and humans are only unfortunate enough to now be sharing the same environment with creatures otherwise capable of wreaking havoc along their journey to consummate and procreate for the survival of their species. If a few humans happen to get trampled here and there, that’s just the reality of survival of the fittest (and in this case, of who can throw their weight around the best).

Monsters is all about survival, and it favors no one. It’s told from the perspective of people, which only makes sense because the production crew and target audience are presumably human; but otherwise Edwards makes no outright statement that the creatures are horrible entities with some ulterior motive to destroy humans. They are simply coexisting in the same environment we are, driven by the same instinct to survive, proliferate, and extend into the next generation – a natural phenomenon of biology that does not determine good or bad, but simply dictates which lifeforms can survive and exist in a particular environment at a given time.

This stance on the monster genre detracts from the human-centric convention, perhaps even deflating the narcissistic human tendency to believe we are above other living organisms. The truth is that beyond physical forms and cognitive abilities, we humans are no different from any other species inhabiting the earth: the inherent instinct is to survive, and to survive as a species, sexual reproduction is a must. The drive for sex is a powerful one, and it arguably connects our own existence to that of the other organisms around us.

The main conflict between the creatures and people in Monsters lies solely in the environmental niche the creatures occupy, and in how their proliferation threatens to constrain where people can currently live safely without worry of resource competition. There is no human-centric conflict at hand, which perhaps detracts from our natural tendency to pride ourselves as humans. Really, in Monsters we humans are just in the way of a new, emerging species that simply wants to reproduce and proliferate on earth.

Unextraordinary Characters


There’s something to be said about depicting two characters who are neither heroic nor extraordinary, but simply two people who find themselves in a tough situation that invariably draws them closer together.

It all begins with Sam Wynden running away to Mexico, and her executive father requesting that his employee, photojournalist Andrew Kaulder (Scoot McNairy), take her back home to the States, where her fiance awaits. Initially bound by Wynden’s father’s request, Sam and Andrew grow close because 1) Andrew finds Sam attractive and 2) Sam can relax and feel comfortable around Andrew.

Many reviewers have commented that Monsters centers around a love story between the two leads. I disagree, primarily because loneliness is the primary emotional drive between the two, and love is perhaps a secondary symptom of their relationship. While I wouldn’t call it a Lost in Translation of the monster convention, Monsters’ substance is in the same vein as Sofia Coppola’s quiet character study: a situation pairs two people into unlikely company, and the weight of their relationship rests solely on the situation which brought them together in the first place.

It’s implied that Sam and Andrew have troubled lives beyond their current get-through-the-infected-zone-and-stay-alive situation, and that perhaps the danger of being trampled and killed by a creature, while intimidating and terror-inducing, is only temporary compared to their chronic situations at home. There’s one scene where finally, at a gas station awaiting an army rescue troop, Sam and Andrew make phone calls separately – Sam to her fiance, and Andrew to his biological son. It’s a poignant scene because up until now, we’ve more or less been engrossed with Sam and Andrew in a situation centered around creatures possibly emerging and threatening their position; now presumably safe, the two separate temporarily to make their personal phone calls, and while they cannot see each other, we can see a marked difference in how they act over the phone: Sam’s fiance is cold and perhaps overbearing, and the stiff way she talks with him implies that she ran away not only from him but also from her father, with whom she interacted similarly over the phone earlier in the film; Andrew emotionally breaks down after talking to his son, forcing himself to keep a steady voice while tearing up at the sound of his son’s ecstatic voice.

The scene effectively rules out love as the primary emotional connection between Sam and Andrew, suggesting instead that they are bound by the desperation and sadness carried over from their lives outside the presence of the creatures. It’s an incredibly moving scene, and it perhaps marks Monsters as one of the few monster-disaster films to truly flesh out a real, substantial pair of protagonists.

People don’t change. And people do. 

Monster convention commonly indulges in the notion that a disaster changes character, and more often than not for the better. In Monsters, people haven’t changed much since the creatures became integrated into the world: politics are still in perpetual turmoil, political statements are just as pervasive, and money means everything. On one occasion, when trying to get back to the States by ferry (the safer route), a Mexican official charges Sam a total of $5,000 for one ticket in a blatant rip-off; on another, the woman Andrew has a one-night stand with rifles through his bags and steals his and Sam’s passports. In a world with giant creatures, people are somehow still motivated by money.

There’s a funny exchange between Sam and Andrew where she asks if he has any qualms about taking pictures and making money off of deaths and others’ misfortunes; he replies that doctors are just the same, and later elaborates that under Sam’s father’s publishing company, a picture of a dead child sells for a few grand while a picture of a happy child sells for zero. I chuckled and sighed a bit at this part, mostly because it’s true: a great majority of our world functions off of others’ misery, and money perpetuates it.

However, there is one scene that struck a particularly humanistic note: it occurs after Sam and Andrew’s caravan has been attacked by a creature, leaving only the two of them alive. As Andrew goes out to assess the situation and pick up supplies for the remainder of their journey back to the States, he sees the body of a dead girl lying sprawled on the ground, the result of the creature initially attacking the truck in front of them. He sets down his bag slowly, taking out a camera; our initial reaction is that he will take the opportunity to snap a picture or two, perhaps to secure some amount of fortune upon returning to his regular life; however, it turns out he was only moving the camera to get to his jacket, which he uses to cover the body of the dead girl. The scene unfolds quietly and marvelously, and while perhaps moralistic, it gives a sense of humanism and hope amidst turmoil and a perpetual, subconscious obsession with something as immaterial as money.

Closing Remarks

Monsters is not a great film, but it is certainly a fine one. Defying conventions of genre and production, director Gareth Edwards’ cinematic debut is a strong one, and definitely one worth noting.

There have been numerous comparisons of Edwards to director Neill Blomkamp of District 9, as well as comparisons between Edwards’ and Blomkamp’s films. I feel, however, that this comparison is unfair: Blomkamp had the luxury of Peter Jackson’s producing and the residual budget from an unmade Halo film, and while District 9 began with an intriguing premise rife with political and social implications, it eventually gave way to generic space-opera convention. Conversely, Edwards effectively funded Monsters on a pennies-equivalent budget and wisely kept the tone of Monsters constant throughout, without promising anything it did not accomplish or aspire towards.

If anything, go see Monsters to see why there are still avenues in the science-fiction/monster-disaster genre to explore, and why a good story with a strong and dedicated vision can accomplish things otherwise unfathomable. Monsters will subvert and surpass your expectations, guaranteed.

Recommended Reading and Links

A couple of 'Monsters’ postcards I picked up in the Landmark Theatre lobby after the credits began rolling. 

'Monsters’ review – by James Berardinelli

How Gareth Edwards shot 'Monsters’ on an Incredibly Low Budget – /Film, video included

'Monsters’ offers up a new view on classic giant monster movies

Director Gareth Edwards, and actors Whitney Able and Scoot McNairy talk about the production and filming of 'Monsters’ – /Film