To restart the randomised series of posts (follow the link if you need a catch-up or want to read the earlier attempts at this), I thought it might be fun to try out some Pixar films, beginning, with no logic or reason whatsoever, with Monsters, Inc. from 2001. The way it works is this: using a magic bit of random number selection, three frames are generated from anywhere in the film, and provide the basis for a discussion of the film from unexpected angles, or at least angles that are not pre-selected to flatter my own interests. It’s usually fun, and the film needs little introduction, so let’s dive straight in. The randomiser has selected frames from the 5th, 66th, and 86th minutes from Monsters, Inc., which means we begin with…
[See also Toy Story 3D]
I hope that the makers of both Shrek Forever After and Toy Story 3 will make good on their implicit promise that these are the concluding chapters of their respective franchises, but for very different reasons. While the Shreks have become increasingly tired, desperate, repetitive and, by becoming what they used to mock, cynical, the Toy Story team have miraculously kept things fresh, developing their ideas rather than chasing their own tail for one last elusive chew of the same old piece of meat. Shrek Forever After moves quickly enough that you might not notice how heavily it is wheezing, hoping to squeeze a bit more milk out of the CGI teat before you get too bored. Toy Story 3, on the other hand, makes a virtue out of the story’s frailty: as a trilogy, Pixar’s three films have grown into an achingly beautiful exploration of mortality, obsolescence, the passing of time and making the best of what you have before it’s gone. It’s about death, ageing and decay. You know, for kids? Instead of fabricating some tosh about wishing on a star, your dreams will come blah and your prince will meh, Toy Story reminds you that you’re going to die – don’t waste the time you have in denial. Embrace the ephemerality of life – it’s what makes it delicious and thrilling. As this film heads towards its end it becomes clear that the toys are heading for retirement, and the suspense becomes about how they’d like to go out – fighting, passive, dignified, accepting?
Hopefully, kids won’t come away with a feeling that they’re hurtling towards the grave, though. Beyond that wish, I won’t try to second-guess what an 8-year-old will find loveable about this film. I’ll just speak for myself. And I’m determined to keep this short and pithy, not least because you’re going to die, and you’ll be wanting to make the most of the time you have left.
The latest in my semi-random, long-neglected series of asides on special effects continues with the concept of the “reveal”. This is that moment when you finally get to see the spectacular object that has been withheld from you for so long. A good reveal will not just happen, but will be the culmination of a series of gestures that draw you in to a state of curiosity, suspense and anticipation. In short, if they’ve spent a lot of money on their biggest selling point, they’re going to make you wait to see it.
[See also Toy Story 3: All Things Must Pass]
The most startling thing about the new 3D version of Toy Story is that it seems perfectly designed for the format. Objects poke out at the camera and there are point-of-view shots: check out the scene where Woody and Buzz are carried into Sid’s house, peering out through a gap in the zip of the bag they’re trapped in. The teeth of the zip frame a deep view into the house, while Sid’s dog Scud snarls and jumps at the camera. The tension between foreground and deep space seems to have been designed for 3D, as does the thrilling climactic chase, but no shots have been changed for the re-release. And because Pixar didn’t go for any gimmicky in-your-face 3D tactics in the first place, the story is still allowed to breathe without the distractions of peekaboo spectacle. Some of Randy Newman’s songs (“Strange Things” and “I Will Go Sailing No More”) are still distractingly literal in describing (or dictating?) the onscreen montages (though all will be forgiven by the time he gets round to the emotional wallop of “When She Loved Me” in Toy Story 2 – I’m welling up just thinking about it…), and the Svankmajer-for-kids grotesqueries of Sid’s room are still an imaginative highpoint even for Pixar. The animation, groundbreaking though it was fifteen years ago, has dated slightly, particularly in the human figures, and most tellingly in the character of Scud: they simply didn’t have procedural modelling programs in place to generate realistic fur, so poor Scud looks a little plastic. (Take a look at this feature about Pixar’s RenderMan system for a glimpse of how much the company innovated in the development of animation.)
Some things haven’t changed at all, though, and Toy Story still impresses by the sheer elegance of its plot: every scene blends imperceptibly into the next step in its development without revealing the mechanics of how it will manoeuvre all of its characters along the formulaic steps of its life-lesson journeys. The film flies by not just thanks to its breezy, witty script (and peerless voice cast), but because there’s not a moment of slack or digression from its simple tale, succinctly told.
So, what’s with the 3D and how on Earth did they do it? You can hear director John Lasseter talking about it here, and read more about the conversion to 3D at the New York Times. Of course, it’s not an automatic process, even when dealing with digital elements (which are surely easier to convert than live action footage); a second camera has to be placed in the virtual space of the film to create the illusion of depth, and a team of “stereographers” had to select which parts of the frame to pull out and which to push back, and how far to push or pull them (by varying the distance between the two “cameras”). Often, that must be a fairly logical choice, bringing foreground elements out and pushing the backgrounds into the distance, but it’s interesting to hear how lead stereographer Bob Whitehill made some of those choices:
When I would look at the films as a whole, I would search for story reasons to use 3-D in different ways. In Toy Story, for instance, when the toys were alone in their world, I wanted it to feel consistent to a safer world. And when they went out to the human world, that’s when I really blew out the 3-D to make it feel dangerous and deep and overwhelming.
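For the technically curious, the adjustment Whitehill describes boils down to a single creative parameter: the separation between the two virtual cameras. A minimal sketch (my own illustrative toy, assuming nothing about Pixar’s actual pipeline) might look like this:

```python
# A toy sketch of deriving a stereo pair from a single virtual camera:
# offset two "eyes" along the camera's right axis by half the interaxial
# distance. Widening the separation exaggerates depth (the "dangerous"
# human world); narrowing it flattens the scene (the "safe" toy world).
# All names here are illustrative, not from any real renderer.

def stereo_pair(position, right, interaxial):
    """Return (left_eye, right_eye) camera positions.

    position   -- (x, y, z) of the original virtual camera
    right      -- unit vector pointing to the camera's right
    interaxial -- distance between the two "eyes"; the stereographer's
                  main creative control
    """
    half = interaxial / 2.0
    left_eye = tuple(p - half * r for p, r in zip(position, right))
    right_eye = tuple(p + half * r for p, r in zip(position, right))
    return left_eye, right_eye

# "Safe" shot: small separation, shallow depth
safe = stereo_pair((0.0, 1.5, 0.0), (1.0, 0.0, 0.0), 0.06)
# "Dangerous" shot: wider separation, deeper, more overwhelming space
deep = stereo_pair((0.0, 1.5, 0.0), (1.0, 0.0, 0.0), 0.25)
```

The point is simply that depth here is a dial to be turned per shot, for story reasons, rather than a fixed property of the scene.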
Thankfully, the effect is not distracting, and never used to excess: Pixar have not cheapened their work with a gimmicky makeover, and judging by the hushed Saturday matinee crowd with whom I watched it, Toy Story still has the ability to enthrall. Being armed with the foreknowledge that its sequel will be even better doesn’t hurt, either…
In some of my earlier writing about special effects, I regularly found myself banging a particular drum, and eventually had to stop myself getting repetitive. In my research, I continually found practitioners, and some critics, espousing a belief that virtual actors were soon going to reach such a perfect state of simulation that spectators would be unable to tell them apart from the real thing. The following comes from a paper I wrote for Stacy Gillis’ collection The Matrix Trilogy: Cyberpunk Reloaded:
It would be all too easy to fall for the suggestion that the age of the synthespian is imminent, and that soon human actors will interact with computer-generated co-stars without the audience realising which is which. Will Anielewicz, a senior animator at effects house Industrial Light and Magic, promised recently that “Within five years, the best actor is going to be a digital actor”. The apotheosis of an animated character into an artificially intelligent, fully simulacrous figure indistinguishable from its carbon-based referent is technically impossible, at least in the foreseeable future, but visual effects are not definitive renderings of a character or event but indicators of the state-of-the-art offering “a hint of what is likely to come” (Kerlow) in the field of visual illusions in the future. It is understandable that such a competitive industry needs to maintain interest in the potential of its products, but the mythos of the virtual actor has infiltrated the Hollywood blockbuster in recent years [...]
For the record, I regret the phrase “technically impossible”: I think the barriers to producing perfect synthespians are not primarily technical, but cultural and economic (if there was enough demand, money would have been found for even more research and development even sooner). I was using the “mythos” or concept of the virtual actor, the belief in inevitable progress, as an example of the kind of teleological argument that I wanted to unpick. It wasn’t hard to find it surfacing in other places. This from another essay in James Lyons and John Plunkett’s Multimedia Histories: From the Magic Lantern to the Internet:
Kelly Tyler, of NOVA Online, a science-based website, has identified the photorealistic human simulacrum as “a new digital grail.” Damion Neff, an artificial intelligence designer for Microsoft video games has called it “the Holy Grail of character animation.” In his keynote address at the 1997 Autonomous Agents Conference, Danny Ellis listed the emotionally intelligent virtual actor as one of four “holy grails” in the field. In May 2003 John Gaeta, discussing his visual effects work on The Matrix Reloaded in the Los Angeles Times, referred to a believable digital human as “the holy grail” of our world. It seems that the Grail analogy has found some currency, at least amongst those working in the relevant creative industries. This frequently uttered analogy sums up the suggestion that technologies of visual representation have been working inexorably towards a final goal, but they might also inadvertently hint that such a goal is essentially elusive.
The development of special effects over time suggests scientific progress as motion towards a logical conclusion, their development effected by a series of refinements and improvements to existing mechanisms. Certainly, computer-generated imagery, with its increasing photographic verisimilitude permitted by faster processing speeds and more efficient rendering software, appears to be advancing at a quantifiable rate, implying a final destination of absolute simulation, a point where a digital human being can be rendered to a level of detail indistinguishable from actual flesh and bone, and possessing enough (artificial) intelligence to be a star offscreen instead of just a hyperreal cartoon upon it.
So, how can this teleology be questioned? How do we construct a more continuist approach to historicising these spectacles in the face of such persuasive technological progress? By drawing the focus away from the dazzling verisimilitude of illusory technologies and focusing on the conceptual questions which underpin their fascinating surfaces. We can observe antecedents of the virtual actor and note that the same spectacular strategies, prompting the same ontological questions, were in play.
That’s enough self-promotional recapping. I hope you get the message that, whatever actual developments there might be in imaging technologies that can simulate properties of human figures, there is another narrative here. As Lev Manovich put it in The Language of New Media, “throughout the history of computer animation, the simulation of the human figure has served as a yardstick for measuring the progress of the whole field.” So, just to make sure you stay interested, every now and then some industry insider pipes up and tells you that you’re just months away from being fooled into believing in a bit of CGI as a living, breathing person. So, here we go again. Only this week, Image Metrics have announced that they’re very close to the Holy Grail. Take a look at this video of actress Emily O’Brien:
[Find out more about the process, or watch a higher resolution version of the video here.]
Pretty impressive stuff. As you can probably tell, her face has been digitally captured and mapped over her actual face, not because it’s a useful thing to do, but because it puts the digital and the flesh versions of Emily close enough that we can compare them. Presumably, the real benefits of the process will be seen in applications that can map her face onto her stunt double, or onto another actress if Emily, heaven forbid, suffers a terrible accident halfway through shooting a very expensive blockbuster movie. Or it might help our already beloved stars transcend the limits of their own bodies. Here’s the original post I spotted on IMDB:
1 January 2009 1:30 AM, PST
“Silicon Valley is on the verge of producing sophisticated software that will allow motion picture companies to create actors on a computer who are visually indistinguishable from real people, San Jose’s Mercury News reported today (Thursday). In the words of the newspaper, which closely follows the software industry, when software engineers finally achieve what it calls “the holy grail of animation,” stars would be able to “keep playing iconic roles even as they aged past the point of believability like Angelina Jolie as Lara Croft or Daniel Radcliffe as Harry Potter.” Rick Bergman, general manager of AMD’s graphics products group, told the Mercury News that his company “is getting real close” to producing computer-generated actors that will look identical to real human beings.”
Does anyone honestly believe that there is a call or a need for technologies for sustaining the shelf lives of these people? Of course not – it’s just a distracting excuse to avoid the real explanation, which is closer to “we’ve got this cool gadget, and we really want to show it off.” I remain skeptical about claims of impending perfection in virtual actors not because the technology isn’t impressive, but because the grand deterministic narrative of progress is overriding the reality of the situation. One savvy poster over at the Blu-Ray forum says it all in a manner that needs no elaboration from me:
Oh, dear lordy, they (meaning, JU) posted the article here, too…
The same exact article (with different star insertions, hence the ’01 dated Tomb Raider ref) that studios plant once a year by the clock, in the hopes they can finally get that CGI resurrected dead-Karloff-and-Chaney Frankenstein vs. the Wolfman movie going again, because it sounded like such a neat idea back when Forrest Gump shook hands with JFK–
Twelve years, Final Fantasy, and Beowulf later, and they’re STILL trying to sell us on “With virtual actors, we could bring George Burns back from the dead, and he’d look so real! “
In case you needed more convincing, I dug up this old article from The Boston Globe, 23rd May 1999. There’s that grail again:
It sounds like a plot from a sci-fi pulp, or an old B movie: A snaggle-toothed scientist toils in the laboratory, perfecting his creation. A touch-up here, a tiny tuck there. But this is not some green-gilled monster from the house of Dr. Frankenstein; it’s a realistic human with shiny hair, glittering teeth, and liquid eyes. The pressure is on to beat other genetic geniuses racing to create human clones. Suddenly, there’s a burst of energy – and voila! – the model comes to life. Blink your eyes, and it’s Marilyn Monroe. Blink again, and it’s James Dean.
This scenario isn’t as far-fetched – or as far off – as it might once have seemed. In this case, the scientists in question are digital doctors: computer programmers developing the software needed to create a photorealistic digital actor, or ”synthespian.” Special-effects wizards have already created convincing digital dinosaurs and dolls, aliens and ants, stuntmen and superheroes. And the two biggest box office draws of the moment – The Mummy and a certain prequel that unfolds in a galaxy far, far away – showcase digital creatures.
So why not digital humans? Why not virtual stars?
”The digital actor has been the Holy Grail forever, since the dawn of 3-D computer animation,” says Brad Lewis, executive producer of visual effects and vice president of Pacific Data Images, the firm that gave life to insects in Antz. ”There’s always been someone trying to do a hand or a face or some aspect of a human being that looked real.”
Some say realistic digital humans will be on-screen within the next five years. These synthespians could be brand-new characters or reincarnations of old legends, long cold in the grave. One Hollywood producer, for instance, is planning a film that would resurrect martial-arts phenomenon Bruce Lee; another is reportedly working on a digital version of an aging screen star (rumored to be Marlon Brando), restoring his youth and making him a contender for a manly role. Another producer got permission to re-create the late George Burns in a film called The Best Man, and a California firm, Virtual Celebrity Productions, has obtained the rights to digitally reproduce a handful of stars, including Marlene Dietrich and W. C. Fields.
Hang on. Did I read that correctly? W.C. Fields? I can understand how Dietrich’s pictorial stillness might translate relatively easily to a digital avatar, but is there anyone less suited to a CGI resurrection by the pixel pixies than W.C. Fields?! Well, they gave it a go a few years back with the Gepetto software (nice puppet reference, there) for real-time 3D animation. I can’t say the results were ever going to replace the memory of the real thing, but that’s not really the point. These rumours and promises build anticipation, expectation, and a sense that something is at stake. The defiance of death, age and human inadequacies is just a cover story for the real business of special effects.
Look at The Curious Case of Benjamin Button. The stated aims of the film, in which Brad Pitt’s character ages “backwards”, might be to integrate visual effects so seamlessly that they don’t distract from the character-driven, Oscar-baiting emotional truth of it all, but there’s no getting away from the fact that, by centralising the concept of a spectacular body like Benjamin’s, a magnet for diegetic and extra-diegetic curiosity, the film can’t help but draw attention to the visual effects used to achieve the concept’s visualisation. Pitt’s body becomes a laboratory for all kinds of tricksy bits of CG animation and performance capture, and there’s a complex connection between the fascinated gaze that attaches to the character’s condition, and the one that fixes on the image of a movie star transformed into a recognisable but fundamentally changed series of physiques by means of cinematic tricks. When Benjamin strikes muscleman poses in the mirror, it’s as much about technological display as it is about his own narcissistic enjoyment. The discourses around the futuristic capabilities of digital imaging technologies shape expectations about how a particular special effect is to be viewed and appreciated. There’s an element of promotional hype in play, but by providing a set of measurable goals and projected rationales, the impression given is that special effects are contributing to a worthwhile cause with a pre-determined path, instead of offering random and occasional attractions; it all makes sure that you stay interested, and keep buying a ticket for the next attraction, and then the next.
Special effects, like movie stars, exist intertextually – they provide reassuring continuities: we are expected to keep watching how they develop from film to film, how each is an improvement upon the last – so it makes sense that a certain weight of expectation should hang on the predicted hybrid of a special effects movie star, an all-digital, perhaps artificially intelligent character actor who passes for flesh and blood before your very eyes. But to truly deliver on that promise to deceive would defeat the object of the special effect, which was to attract and hold that multi-focus spectatorial gaze. What’s the use of a spectacular attraction if nobody notices it?
Lisa Bode, ‘“Grave Robbing” or “Career Comeback”? On the Digital Resurrection of Dead Screen Stars’, in History of Stardom Reconsidered, ed. Hannu Salmi et al. (Turku: International Institute for Popular Culture, 2007). Available as an eBook at http://iipc.utu.fi/reconsidered/.
It’s been over a week since I’ve blogged – towards the end of a semester, things seem to get much busier, and I start to feel a bit stupider, so it’s harder to sustain my more prolific bursts of writing. Perhaps my previous post was long enough to keep you busy (or fed up of me) for a while. Thanks to all those who’ve commented on my Cloverfield paper: I always enjoy receiving feedback, negative or positive, so feel free to add your thoughts to the discussion. Hopefully, things will ease up a little and I can devote a bit more time to this. To reboot things, I thought I’d play to my specialism and start a series of short posts about special effects. This is partly to expand upon and clarify points from Performing Illusions, and also to include some ideas that were too late to make the final cut of the book. I’ll be making these up as I go along from time to time, but I’m happy to take requests. It’ll be a good way for me to take notes as I go, and hopefully provide some interesting reading.
To kick things off, here’s a little analysis of a great sequence from a far less great film, Spider-Man 3. About halfway through describing the plot of a superhero film, I usually pause for breath and realise how ridiculous it all sounds. Teenage boy sprouts web-spinning glands and dresses up in natty spandex togs to fight crime. Meanwhile, some other dude gets exposed to some sciencey stuff that turns him into sand. It’s not exactly Death in Venice, but this kind of story has become so familiar that we barely bat an eyelid when some new fancy-dress vigilante takes to the screen. Stop and think for a moment. Peter Parker is at school. Then he gets bitten by a genetically modified spider and picks up some arachnid tendencies. Why are we not laughing this stuff out of town? Partly, I think, it’s because of the familiarity: we’ve seen a lot of superhuman heroic figures over the years, whether it’s Achilles, Aeneas, Hercules, Perseus, Jesus, Beowulf, Gawain, King Arthur, Superman, Batman, Iron Man, Barack Obama or token female Wonder-Woman. But also it’s because we understand the allegorical function of these characters. Whether it’s Superman as a refugee migrant who has to change his name and act like a local to gain acceptance in society (while secretly saving the world’s collective ass), or Spider-Man playing out his awkward years of bodily change and early-career anxiety, we know that these are not portrayals of how things really are, but re-imaginings of things that are easier to talk about and popularise if we dress them up in shiny clothes and pit them against a series of similarly allegorised embodiments of villainy/social evils.
That’s my starting point here, but I suppose it’s not strictly relevant, except to say that Spider-Man 3 operates (because it is a sequel) in a pre-existing alternative world where scientific exaggeration is an accepted form of expression, with certain agreed limits on what may occur: there’ll be no “magic” here, just scientific principles extruded to a degree that probably constitutes impossibility, all the while remaining anchored in a logical basis (however tenuous) that isn’t there to make incredible events believable, but comprehensible.
The scene in question is an origin sequence. We get to see how the Sandman came into being: as such, it offers a spectacle of incarnation, animating an apparently living body out of inanimate materials. It is structured between the bookends of these two states, beginning with an extreme, near-microscopic close-up of grains of sand, which gradually cohere into an image of the actor Thomas Haden Church. This demarcation of the set-piece is a common trope in this kind of foregrounded spectacle – it has clear entry and exit points and stands alone as an autonomous performance, even as it offers some narrative information; it possesses a limited colour scheme of browns and greys (er… it’s sand-coloured), and the lack of dialogue or peripheral characters further enforces the self-containment.
In witnessing the birth of the Sandman, one of the pleasures comes from seeing a two-dimensional comic book character transplanted into a three-dimensional, digitally rendered figure. The Sandman is the perfect CGI character: the kind of particle-system modelling used to make swarms of particles take on shapes and patterns is something that computer graphics is well equipped to do – it would be extraordinarily difficult, if not impossible, to achieve in stop-motion or another kind of pro-filmic object animation. So, while the scene references older media, it focuses on graphic qualities that exude novelty and technological specificity. The virtual camera (the scene is entirely computer-generated, so it’s not entirely accurate to think of the camera as situated within the scene) executes a slow track around the central focus of the emerging Sandman. The stressed dimensionality of the sequence thus puts further distance between this and two-dimensional animation, optical process shots and puppet animation, where camera movements are much more difficult to pull off. In short, the scene’s novelty value is to be understood in terms of its differentiation from prior instances of animation and effects shots.
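To give a flavour of what particle-system modelling means in practice (this is my own drastically simplified toy, not the studio’s actual tooling), the core idea is that thousands of independent grains are each pulled toward a target position sampled from the character’s surface, so an amorphous cloud gradually “coheres” into a figure – exactly the movement the Sandman’s birth dramatises:

```python
# Toy particle system: each grain moves a fixed fraction of the way
# toward its assigned target every step, so a random scatter converges
# on a target shape. Names and numbers are purely illustrative.

import random

def step(grains, targets, rate=0.2):
    """Advance every grain a fraction of the way toward its target."""
    return [tuple(g + rate * (t - g) for g, t in zip(grain, target))
            for grain, target in zip(grains, targets)]

random.seed(1)
# The "character": here just five points along a line
targets = [(float(i), 0.0, 0.0) for i in range(5)]
# The amorphous starting cloud
grains = [(random.uniform(-10, 10), random.uniform(-10, 10), 0.0)
          for _ in range(5)]
for _ in range(40):
    grains = step(grains, targets)
# After enough steps each grain sits almost exactly on its target.
```

A real version replaces the target line with millions of points sampled from a scanned or modelled body, adds physics (gravity, wind, collisions), and hands the result to a renderer – but the coherence effect is the same principle scaled up.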
The long take is the core of this sequence. The sustained performance of a technical illusion would seem to imply its pro-filmic authenticity: the camera never needs to cut away or fragment the trick to hide its mechanisms in montage. However, there is no longer a logical reason to attach such notions of presence and solidity to things we see onscreen, even when the camera’s unflinching eye seems to be hinting that there are no sleight-of-hand edits, nothing up its figurative sleeve; a virtual camera tracking through virtual space to “film” a virtual object never needs to cut, and those connotations of authenticity can just as easily be translated into indications of artifice, of a lack of presence or ostentatious virtuality. But digital effects still exploit our residual expectations of photographicness. You can see it in the use of artificial lens flare to suggest that the camera is physically present, its mechanism overloaded by the scene in front of it. Lens flare is a side-effect of a camera’s registration process, and in a virtual scene it is added in order to offset the true origins of the shot.
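Since fake lens flare is such a neat example of digitally simulating a photographic flaw, here is a minimal sketch of the compositing idea (my own illustration, assuming a simple additive glare pass rather than any production renderer’s method):

```python
# Toy additive lens-flare pass over a rendered frame. Real compositors
# layer textured flare sprites and streaks; here each pixel simply gains
# brightness falling off with distance from the light source's projected
# screen position. All names are illustrative.

import math

def add_flare(image, light_xy, strength=1.0, falloff=0.05):
    """image: 2-D list of grey values in [0, 1]; light_xy: (col, row)
    of the bright source "overloading" the virtual lens."""
    lx, ly = light_xy
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, value in enumerate(row):
            d2 = (x - lx) ** 2 + (y - ly) ** 2
            glare = strength * math.exp(-falloff * d2)
            new_row.append(min(1.0, value + glare))  # clip, like a sensor
        out.append(new_row)
    return out
```

The telling detail is that the pass imitates a defect: the “clipping” of overloaded values is precisely the kind of apparatus failure a wholly synthetic image would never suffer unless someone put it there on purpose.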
These techniques subtly purchase your understanding of the sequence as a wholly situated moment, recoding it not as a flurry of algorithmic manoeuvres, but as a live recording of an event, where some of the unplanned markings of the photographic apparatus might come into play. To cut through my verbose description: the shot, which was actually constructed in a computer, is dressed up to look like it was shot on a set. So, computer-generated effects do not erase or evade the properties of photographic media. Instead, they extend those properties to supernatural lengths: the power of the illusion arises out of the distance between the acknowledged impossibility of the event, and the impression of authenticity lent to it by the markers of a situated apparatus.
As Tuesy and I sat through Robert Zemeckis’ souped-up Beowulf, two questions cropped up. The first came in several forms, but was really an inquiry into the same identity issue: “Is that Anthony Hopkins?” “Is that John Malkovich?” “Is that that one that’s married to Sean Penn?” “Is that Ray Winstone’s stomach?” The second question was “Why is everything CGI?” I guess the questions are connected, and they both spring from the film’s hybrid status somewhere between animation and live action performance.
This is one of those films that utilises cutting-edge technology, relying on refined techniques for its novelty value and spectacular impact, but tempers the sense of futurity in several ways. The adaptation of a classic tale might be seen as enhancing the digital surprise by differentiating this latest version from older renditions of the same material, but it also aligns the excesses of its action with the hyperbolic, epic imagination of arcane mythology. Primarily, though, Beowulf repeatedly asserts an earthy physicality that offsets the intangible number-crunching of digital animation. From its very first scenes, it piles up images of drunken rutting and belching rambunctiousness.
There is dirt, and there is filth. Personally, I don’t think it goes far enough, and there are rather too many shiny, gleaming objects, clear complexions and superb haircuts, but the intention is clear: it purchases your belief in the presence of its characters by reminding you of their organs, appetites and bodily functions. By programming your characters to perform all the tics and workings of a body powered by digestive, rather than algorithmic, processes, you can help to knock down the first barrier to engagement with animated characters, namely, the impression that you’re watching a machinic figure moving with pre-directed precision. This disavowal of computed origins is further strengthened by the trace elements of the actors. The film involves something known as “performance capture”, which designates a more nuanced version of “motion capture”. Where motion capture was meant to record only the bodily movements of actors as a series of co-ordinates over which a digital body could be laid, performance capture records facial expressions as well, transferring them to the digital avatar that will take the actor’s place onscreen. These processes descend from rotoscoping, whereby cartoons could be traced frame by frame over live action footage, and the main aim seems to be to preserve the motile characteristics and unique gait of a performer, which are difficult to mimic in traditional cel animation. So, that is Anthony Hopkins. You can see many of his distinctive gestures, and hear his voice. But this is a ghosted version of that performance, manipulated enough to qualify as animation, but with the actor’s imprint flickering in and out of view in a flash of the eyes or a curl of the lip.
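At its most basic, the “co-ordinates over which a digital body could be laid” idea can be sketched in a few lines (a deliberately naive illustration of the principle – real pipelines solve full skeletons and facial blend-shapes, and every name below is hypothetical):

```python
# Toy capture transfer: marker positions recorded in the actor's space
# drive corresponding points on a digital rig via per-marker offsets,
# so the performer's unique motion survives into the avatar.

def retarget(markers, offsets):
    """markers: {name: (x, y, z)} captured for one frame;
    offsets: {name: (dx, dy, dz)} mapping actor space to rig space."""
    return {name: tuple(m + d for m, d in zip(pos, offsets[name]))
            for name, pos in markers.items()}

# One captured frame of (hypothetical) facial markers
frame = {"brow_l": (0.1, 1.7, 0.05), "lip_corner_r": (0.03, 1.55, 0.08)}
rig_offsets = {"brow_l": (0.0, 0.2, 0.0), "lip_corner_r": (0.0, 0.2, 0.0)}
rig_frame = retarget(frame, rig_offsets)
```

The ghosting effect I describe above falls out of exactly this arrangement: the trajectory is the actor’s, but everything the trajectory drives – the flesh, the face, the lighting – belongs to the machine.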
Angelina Jolie in particular keeps her own facial features (whether or not the rest of her anatomy is accurately reflected would require closer inspection…), but the CG chassis that embodies her onscreen is just different enough to mark her character out as a shape-shifting phantom in human form, one of the few times when the digital nature of a protagonist makes perfect sense.
As if daring you to dismiss him as a mere set of pixels, Beowulf continually exerts his phallic power by stating his name with vigour, as if that was all you needed to know, as if forcing you to remember it in connection with whatever mighty deed he happened to be conducting at the time. His manhood, hidden behind strategically placed objects, stands in contrast to Grendel’s undecorated crotch during their naked battle. As the film wears on and Beowulf begins to doubt his own prowess, his cocky verve is undermined in a series of threats of penetration, enhanced by the 3-D tech’s duty to thrust things in the direction of the viewer, and especially in an unsubtle moment where Grendel’s mother causes his sword to melt away.
OK, that’s my bargain basement psychoanalysis bit done for today. Let’s get back to the machinery. The term “performance capture” is really just PR for a fussier version of mo-cap, since it offers reassurance that the actor behind computer graphics is carefully preserved rather than thickly painted out. It prompts the spectator to engage in a kind of microscopic inspection of the film’s surfaces, assessing the resemblance of all that dirt, water, smoke, blood and skin to its real-world counterparts. If IMAX 3-D, all-CG films are to become a regular practice in Hollywood, the technology might have to overcome that effect, since it directs the eye to the minute details, possibly at the expense of the broader issues of narrative and character. I can’t help getting distracted by the remnants of the 3-D effects every time something bleeds on the camera or pokes a spear in its direction, though I know I’m supposed to be involved by these incursions into my personal space. The technology announces itself so loudly that it attracts a nerdish, picky sort of spectatorial attention that is less interested in congratulating the latest achievements than in ticking off the film-makers for the inadequacies of the process when compared to actual human beings. The best evidence I can find for this hyper-critical attitude comes from reviews in popular newspapers and magazines. Sorry if this looks long-winded, but I want to prove by repetition that critics have settled into a kind of running commentary on the state of the art, but with a seemingly limited vocabulary for doing so:
“Humanity takes a back seat on this rollercoaster. The male characters are two-meat dishes, beefy and hammy, with a penchant for taking their clothes off before battle. (How many shots of Winston’s derriere did they think we needed?) The females (Robin Penn Wright, Alison Lohman) are glazed-eyed mannequins stuck in the shop-window of digimation. This process is still not confident enough to bring all its goods to the sales areas. When two or more characters stop and talk, the film becomes dead and decorous, though I exempt John Malkovich, bringing an Olivier-worthy snap and sarcasm to wicked Unferth.”
Nigel Andrews, The Financial Times, 14th November 2007.
“Of course, this all unfolds in a lovingly created, yet entirely synthetic computer-generated, environment, and the success or failure of the project, for some, might hinge upon the ultimate credibility of this world – in some places the “synthespians” look like rubber-faced automatons, while in others the film raises insuperable issues about the nature of stardom (if the computerised Jolie is the same, but not quite, as Jolie in real life, why use Jolie at all, other than as brand value?).”
Kevin Maher, The Times, 15th November 2007.
“Close-up, the effect is startling – particularly with Winstone, whose gruff, tough personality is perfectly matched by his muscle-bound avatar. It’s still a little airless and lacking in soul, while the sets and extras lack the vibrancy of pure animations such as Finding Nemo or Monsters, Inc. And for all of the hoopla about advances in 3D, the glasses remain uncomfortable and the in-your-face shots gimmicky. There are moments of terrific action – and Jolie is just embarrassingly seductive, even in pixelated form – but the tone is more Monty Python than The Lord Of The Rings: summed up in one sequence where furniture and ornaments are used to obscure the jangly bits of Winstone’s naked warrior. Ground-breaking and technologically exciting it may be, but while Beowulf might be significant in the history of moving pictures, it is not a picture that will move you.”
Nev Pierce, BBC, 14th November 2007.
“The only problem with the film is that the motion-capture animators haven’t quite got the eyes right, with the result that all the characters have the same oddly vacant expression. Similarly, while the majority of the characters strongly resemble the actors playing them, Beowulf seems to have been modelled on Sean Bean rather than Ray Winstone.”
Matthew Turner, View London, 15th November 2007.
“If there’s something perverse about assembling a cast like this (Brendan Gleeson, John Malkovich and Robin Wright Penn are also on duty) only to coat them in a kind of cosmetic digital wax, the actors’ personalities do come through, and the facial rendering is less creepily plastic than in “Polar Express.” In as much as Beowulf himself is capable of nuance — doubt, shame, regret, as well as courage — it’s there on screen, right along with his rippling biceps and his nasal hair. Only the dull and robotic eyes give the game away.”
Tom Charity, CNN.
“Using the same computer wizardry as Zemeckis’ The Polar Express with some additional bells and whistles, Beowulf goes a long way to correcting the dead-eyed look that made that 2004 Yuletide fable such a faintly creepy experience. Up close Anthony Hopkins’ ageing king Hrothgar, Angelina Jolie’s slinky siren and John Malkovich’s sceptical knight are pretty much perfect facsimiles that answer a lot of questions about whether a performance can be adequately rendered in pixelized form. When the frame shifts to long shot, though, the limitations of this hybrid halfway-house between animation and live-action soon become apparent, the assorted thanes and swains in Hopkins’ court having all the definition of an extra from Shrek.”
“The effects can be spectacular, as with Gollum in Lord Of The Rings. They can also turn expressive human beings into robotic creatures whose responses seem to have been programmed on a time-delay mechanism. If you saw Polar Express, the last film that Zemeckis did using performance capture, you’ll know what I mean.”
Sandra Hall, The Sydney Morning Herald, 30th November 2007.
Even those critics who give overwhelmingly positive reviews can’t help taking a pop at the technology’s problem with the human touch. Anthony Quinn of The Independent finds himself resorting to a tortuous pun:
“Whatever the liberties taken, this Beowulf, seen in glorious performance-capture 3D at the Imax, makes for rip-roaring entertainment. There’s plenty of campery, inevitable when you have Anthony Hopkins and John Malkovich in the voice cast, but for all the ham-inatronic moments, it looks terrific.”
Anthony Quinn, The Independent, 16th November 2007.
I don’t want to posit these views as just petulant whining, as if the critics represent spoilt children at Christmas whose new toy isn’t exactly what they wanted, but rather as an indication that a sort of micro-spectacle adheres to the viewing of CGI cinema: the spectacular emphasis is not just on scale and sweep, but on the particulate minutiae within the frame. The frequent references to robots and automata might have been a way of recovering the film within a long history of human simulacra and technological performance, but instead they refer to “lifeless” figures with “dead eyes.”
There are, of course, significant alterations to the original story. Zemeckis’ Beowulf does not kill Grendel’s mother, but instead allows himself to be seduced into impregnating her (there’s quite a big difference), fathering the dragon that provides the fatal battle of the finale. Beowulf is played at all ages by Ray Winstone, who also plays the jester impersonating Beowulf, and the dragon he sires with Grendel’s mother. A human presence thus haunts the movements and visages of these figures, never fully visible but fleetingly detectable. This may be an analogy for how we experience a relationship with a fictional character, imaginatively inhabiting them with our own unshakeable trace. The hero is undermined by making him the cause of disaster, an arrogant fabulist concerned with his legacy and the songs that will be sung about him (he doesn’t quite express a desire for a fancy cartoon to one day tell his story, but the self-reflexivity is laid on pretty thick). The liberties taken can thus be seen to posit the original text as the mythology which the film will disambiguate, filling in the gaps that were not recorded by oral history. If earlier versions of the story printed the legend, Zemeckis (and screenwriters Neil Gaiman and Roger Avary) seems to promise, this adaptation will reveal the flawed narcissist behind the mythical warrior. By doing so, they make explicit what undergraduates have been doing for years – sympathy for the monsters and suspicion of the narrator.