How Special Effects Work #2: Virtual actors are on the way.


In some of my earlier writing about special effects, I regularly found myself banging a particular drum, and eventually had to stop myself getting repetitive. In my research, I continually found practitioners, and some critics, espousing a belief that virtual actors were soon going to reach such a perfect state of simulation that spectators would be unable to tell them apart from the real thing. The following comes from a paper I wrote for Stacy Gillis’ collection The Matrix Trilogy: Cyberpunk Reloaded:

It would be all too easy to fall for the suggestion that the age of the synthespian is imminent, and that soon human actors will interact with computer-generated co-stars without the audience realising which is which. Will Anielewicz, a senior animator at effects house Industrial Light and Magic, promised recently that “Within five years, the best actor is going to be a digital actor”. The apotheosis of an animated character into an artificially intelligent, fully simulacrous figure indistinguishable from its carbon-based referent is technically impossible, at least in the foreseeable future, but visual effects are not definitive renderings of a character or event but indicators of the state-of-the-art offering “a hint of what is likely to come” (Kerlow) in the field of visual illusions in the future. It is understandable that such a competitive industry needs to maintain interest in the potential of its products, but the mythos of the virtual actor has infiltrated the Hollywood blockbuster in recent years […]

For the record, I regret the phrase “technically impossible”: I think the barriers to producing perfect synthespians are not primarily technical, but cultural and economic (if there was enough demand, money would have been found for even more research and development even sooner). I was using the “mythos” or concept of the virtual actor, the belief in inevitable progress, as an example of the kind of teleological argument that I wanted to unpick. It wasn’t hard to find it surfacing in other places. This from another essay in James Lyons and John Plunkett’s Multimedia Histories: From the Magic Lantern to the Internet:

Kelly Tyler, of NOVA Online, a science-based website, has identified the photorealistic human simulacrum as “a new digital grail.” Damion Neff, an artificial intelligence designer for Microsoft video games, has called it “the Holy Grail of character animation.” In his keynote address at the 1997 Autonomous Agents Conference, Danny Ellis listed the emotionally intelligent virtual actor as one of four “holy grails” in the field. In May 2003 John Gaeta, discussing his visual effects work on The Matrix Reloaded in the Los Angeles Times, referred to a believable digital human as “the holy grail” of our world. It seems that the Grail analogy has found some currency, at least amongst those working in the relevant creative industries. This frequently uttered analogy sums up the suggestion that technologies of visual representation have been working inexorably towards a final goal, but it might also inadvertently hint that such a goal is essentially elusive.

The development of special effects over time suggests scientific progress as motion towards a logical conclusion, their development effected by a series of refinements and improvements to existing mechanisms. Certainly, computer-generated imagery, with its increasing photographic verisimilitude permitted by faster processing speeds and more efficient rendering software, appears to be advancing at a quantifiable rate, implying a final destination of absolute simulation, a point where a digital human being can be rendered to a level of detail indistinguishable from actual flesh and bone, and possessing enough (artificial) intelligence to be a star offscreen instead of just a hyperreal cartoon upon it.

So, how can this teleology be questioned? How do we construct a more continuist approach to historicising these spectacles in the face of such persuasive technological progress? By drawing the focus away from the dazzling verisimilitude of illusory technologies and focusing on the conceptual questions which underpin their fascinating surfaces. We can observe antecedents of the virtual actor and note that the same spectacular strategies, prompting the same ontological questions, were in play.

That’s enough self-promotional recapping. I hope you get the message that, whatever actual developments there might be in imaging technologies that can simulate properties of human figures, there is another narrative here. As Lev Manovich put it in The Language of New Media, “throughout the history of computer animation, the simulation of the human figure has served as a yardstick for measuring the progress of the whole field.” So, just to make sure you stay interested, every now and then some industry insider pipes up and tells you that you’re just months away from being fooled into believing in a bit of CGI as a living, breathing person. So, here we go again. Only this week, Image Metrics have announced that they’re very close to the Holy Grail. Take a look at this video of actress Emily O’Brien:

[Find out more about the process, or watch a higher resolution version of the video here.]

Pretty impressive stuff. As you can probably tell, her face has been digitally captured and mapped over her actual face, not because it’s a useful thing to do, but because it puts the digital and the flesh versions of Emily close enough that we can compare them. Presumably, the real benefits of the process will be seen in applications that can map her face onto her stunt double, or onto another actress if Emily, heaven forbid, suffers a terrible accident halfway through shooting a very expensive blockbuster movie. Or it might help our already beloved stars transcend the limits of their own bodies. Here’s the original post I spotted on IMDB:

1 January 2009 1:30 AM, PST

“Silicon Valley is on the verge of producing sophisticated software that will allow motion picture companies to create actors on a computer who are visually indistinguishable from real people, San Jose’s Mercury News reported today (Thursday). In the words of the newspaper, which closely follows the software industry, when software engineers finally achieve what it calls “the holy grail of animation,” stars would be able to “keep playing iconic roles even as they aged past the point of believability like Angelina Jolie as Lara Croft or Daniel Radcliffe as Harry Potter.” Rick Bergman, general manager of AMD’s graphics products group, told the Mercury News that his company “is getting real close” to producing computer-generated actors that will look identical to real human beings.”

Does anyone honestly believe that there is a call or a need for technologies for sustaining the shelf lives of these people? Of course not – it’s just a distracting excuse to avoid the real explanation, which is closer to “we’ve got this cool gadget, and we really want to show it off.” I remain skeptical about claims of impending perfection in virtual actors not because the technology isn’t impressive, but because the grand deterministic narrative of progress is overriding the reality of the situation. One savvy poster over at the Blu-Ray forum says it all in a manner that needs no elaboration from me:

Oh, dear lordy, they (meaning, JU) posted the article here, too…

The same exact article (with different star insertions, hence the ’01 dated Tomb Raider ref) that studios plant once a year by the clock, in the hopes they can finally get that CGI resurrected dead-Karloff-and-Chaney Frankenstein vs. the Wolfman movie going again, because it sounded like such a neat idea back when Forrest Gump shook hands with JFK–
Twelve years, Final Fantasy, and Beowulf later, and they’re STILL trying to sell us on “With virtual actors, we could bring George Burns back from the dead, and he’d look so real!”


In case you needed more convincing, I dug up this old article from The Boston Globe, 23rd May 1999. There’s that grail again:

It sounds like a plot from a sci-fi pulp, or an old B movie: A snaggle-toothed scientist toils in the laboratory, perfecting his creation. A touch-up here, a tiny tuck there. But this is not some green-gilled monster from the house of Dr. Frankenstein; it’s a realistic human with shiny hair, glittering teeth, and liquid eyes. The pressure is on to beat other genetic geniuses racing to create human clones. Suddenly, there’s a burst of energy – and voila! – the model comes to life. Blink your eyes, and it’s Marilyn Monroe. Blink again, and it’s James Dean.

This scenario isn’t as far-fetched – or as far off – as it might once have seemed. In this case, the scientists in question are digital doctors: computer programmers developing the software needed to create a photorealistic digital actor, or “synthespian.” Special-effects wizards have already created convincing digital dinosaurs and dolls, aliens and ants, stuntmen and superheroes. And the two biggest box office draws of the moment – The Mummy and a certain prequel that unfolds in a galaxy far, far away – showcase digital creatures.

So why not digital humans? Why not virtual stars?

“The digital actor has been the Holy Grail forever, since the dawn of 3-D computer animation,” says Brad Lewis, executive producer of visual effects and vice president of Pacific Data Images, the firm that gave life to insects in Antz. “There’s always been someone trying to do a hand or a face or some aspect of a human being that looked real.”

Some say realistic digital humans will be on-screen within the next five years. These synthespians could be brand-new characters or reincarnations of old legends, long cold in the grave. One Hollywood producer, for instance, is planning a film that would resurrect martial-arts phenomenon Bruce Lee; another is reportedly working on a digital version of an aging screen star (rumored to be Marlon Brando), restoring his youth and making him a contender for a manly role. Another producer got permission to re-create the late George Burns in a film called The Best Man, and a California firm, Virtual Celebrity Productions, has obtained the rights to digitally reproduce a handful of stars, including Marlene Dietrich and W. C. Fields.

Hang on. Did I read that correctly? W.C. Fields? I can understand how Dietrich’s pictorial stillness might translate relatively easily to a digital avatar, but is there anyone less suited to a CGI resurrection by the pixel pixies than W.C. Fields?! Well, they gave it a go a few years back with the Gepetto software (nice puppet reference, there) for real-time 3D animation. I can’t say the results were ever going to replace the memory of the real thing, but that’s not really the point. These rumours and promises build anticipation, expectation, and a sense that something is at stake. The defiance of death, age and human inadequacies is just a cover story for the real business of special effects.

Look at The Curious Case of Benjamin Button. The stated aims of the film, in which Brad Pitt’s character ages “backwards”, might be to integrate visual effects so seamlessly that they don’t distract from the character-driven, Oscar-baiting emotional truth of it all, but there’s no getting away from the fact that, by centralising the concept of a spectacular body like Benjamin’s, a magnet for diegetic and extra-diegetic curiosity, the film can’t help but draw attention to the visual effects used to achieve the concept’s visualisation. Pitt’s body becomes a laboratory for all kinds of tricksy bits of CG animation and performance capture, and there’s a complex connection between the fascinated gaze that attaches to the character’s condition, and the one that fixes on the image of a movie star transformed into a recognisable but fundamentally changed series of physiques by means of cinematic tricks. When Benjamin strikes muscleman poses in the mirror, it’s as much about technological display as it is about his own narcissistic enjoyment.

The discourses around the futuristic capabilities of digital imaging technologies shape expectations about how a particular special effect is to be viewed and appreciated. There’s an element of promotional hype in play, but by providing a set of measurable goals and projected rationales, the impression given is that special effects are contributing to a worthwhile cause with a pre-determined path, instead of offering random and occasional attractions; it all makes sure that you stay interested, and keep buying a ticket for the next attraction, and then the next.
Special effects, like movie stars, exist intertextually – they provide reassuring continuities: we are expected to keep watching how they develop from film to film, how each is an improvement upon the last – so it makes sense that a certain weight of expectation should hang on the predicted hybrid of a special effects movie star, an all-digital, perhaps artificially intelligent character actor who passes for flesh and blood before your very eyes. But to truly deliver on that promise to deceive would defeat the object of the special effect, which was to attract and hold that multi-focus spectatorial gaze. What’s the use of a spectacular attraction if nobody notices it?


See also How Special Effects Work 1: The Sandman & How Special Effects Work 3: Now That’s Magic…

Lisa Bode, ‘“Grave Robbing” or “Career Comeback”? On the Digital Resurrection of Dead Screen Stars’, in History of Stardom Reconsidered, ed. Hannu Salmi et al. (Turku: International Institute for Popular Culture, 2007). Available as an eBook.

12 thoughts on “How Special Effects Work #2: Virtual actors are on the way.”

  1. Pingback: How Special Effects Work #2: Virtual actors are on the way. | 3D Animation And Graphics Blog

  2. Pingback: Virtual actors are on the way. |

  3. Perhaps your understanding of this issue would be clearer by observing that Hollywood’s efforts to present highly photographic visual images of people are distinct from the work in artificial intelligence that may one day be applied to automate these images.

    Computer-generated video displays of attractive and realistic people, whether appearing in feature movies, animation shorts, video games, or even in live appearances, have yet to become “mainstream” to the general public. The use of this type of animation in movies sets it apart from our everyday experiences.

    Perhaps the “real” Holy Grail is when we start to have everyday experiences of interaction with virtual actors and characters, and these actors are nearly indistinguishable from people. Then, we will look past the appearances, and focus more on the intelligence – feeling capabilities, as standards of achievement by the animation people behind the characters.

    For example, my company has an agreement to work with the character WC Fields, which you mentioned above, and we have the 3D assets to proceed on our real-time platform. Making this character match the appearance of the real WC Fields is just one of the issues. Making him sound (through the use of trained impersonators) like WC is another requirement. Giving him the correct attitude is another requirement. Making the experience of interacting with him possible in the real world is another. Learning all the nuances of the real WC is another. And the list goes on. So making his pixels work is a relatively minor issue when we consider what we are doing with this animation accomplishment.

    Your observations seem pretty limited to appearance, which is just a fraction of the mission of creating virtual characters. And since the mission is so much more complex, it is going to require much larger budgets, at a time when dollars are tight.

    It seems a little unlikely that this endeavor will become a priority until we have the time and money to experiment and play, in an environment where programming the “artificial” intelligence of the character can get some attention. We certainly have the hardware and software now, to make them look good, even running on high-end laptops. Can we make them “act?”

    And would they then become a product? A service? A useful agent?

    I wonder.

  4. Hi, Gary. Thanks for your message – it’s good to hear from an expert practitioner, and your point is well made that these are technologies that exist independently of mainstream cinema. Hollywood just appropriates them once they develop to a point where they suit the pre-existing imperatives of narrative film. I think we’re in agreement that this is not all about the pixels, or about the surface resemblances of digital humans (though I think that the celebration of photorealism fostered by promoters of this kind of film-making invites a kind of hyper-critical spectatorship that looks very closely at the finer points of CGI), but about the way people view or interact with those characters. If the level of simulation is escalated to a point where viewers don’t know that one of a group of characters is computer-generated, then the rules of that interaction will be completely altered. But I don’t think that’s what you’re aiming for, and that’s why, as a film scholar, I’m interested in why such claims would have been made for the advent of perfect human simulacra.

    I’d love to hear more about the WC Fields project, and I’m glad that your aim is not to “simulate” his presence. But I wonder why he was chosen out of all the stars they could have picked. You can “learn the nuances”, catalogue the tics and performance styles that make him special, but you might find that the things which make him interesting are the very things that cannot be synthesised. Sure, he created an onscreen persona that was probably quite different from his private self (nobody could be that misanthropic and still show up for work on time), and the attraction of movie stars is largely built on the paradox that these people are both extremely present and absent to us – these are people who we will probably never meet, but to whom we have the kind of intimate access (close-ups for example) that we only get from people we are very close to in the real world; these are aspects that find interesting parallels in virtual actors, which create the conundrum that they seem to display signs of life while not possessing a backstory or a private life. The way viewers engage with the kinds of live animation that you yourself produce is dependent upon that dual focus, the fun of something vital and vivid created through machines which we know to be insensate. This is precisely why my comments in this post are directed at the myth of absolute simulation – as soon as you remove that element of suspended disbelief and suggest that a digital character in a film is perfectly assimilated with its human co-stars, you lose the whole point of virtuality. I’m glad that you refer to your work as “puppetry”. I’m sure you know that the aim of puppetry as practised in many diverse cultures around the world for thousands of years (if that’s not too grandiose a statement!) has never been to deceive the spectator into believing that the puppet is a living human, but to tread the boundary between animate and inanimate.
I believe that special effects (though I know that’s not really the line of work you’re in) also depend upon, and benefit from, the viewer’s oscillation between belief and disbelief, sometimes balancing between those two states at once.

    But, aside from my wordy justification for my own argument, I guess I could just say that virtual actors will not be mistaken for the real thing because people are really, really good at telling the difference. We have evolved to read facial expressions at a very sophisticated cognitive level. The better the simulation, the better the spectator gets at discerning the telltale traces of simulation.

  5. Hi Dan,

    Thanks for your reply.

    First, you might be interested in this article regarding the technical aspects of Benjamin Button – and how the digital shooting was treated to affect grain, etc.

    Studio Daily is a good source for behind-the-scenes technical information.

    It takes a pretty good eye to really identify what is CGI in Button and what is not. As you mentioned, people are really, really good at telling the difference – well, this movie might surprise you. Computer graphics and facial motion capture were used a lot, to bring Pitt’s acting choices to the old man’s face, in the form of those micro-expressions you mentioned.

    And that is why I like certain actors – I have become practiced at seeing and watching for the micro-expressions that let me know what the character is thinking. It started with Clint Eastwood movies for me – those close-ups of eyeballs.

    So when CGI artists get really good at capturing the tiny, subtle motions of the real actors’ facial expressions, people will finally be convinced. Button is a great example of that starting to happen.

    Regarding WC Fields, the owner of Global Icons represented Fields’ celebrity rights for his family, mainly his son Ron. I can’t say for sure, but I think they chose him because they were able to get permission and cooperation from the family, which is also willing to cooperate with me, should we get a chance to move this project forward on our software platform, ToonMX.

    Can we capture the micro-expressions with ToonMX? Perhaps not all of them, but maybe a few that would help to define this character.

    You might also be interested to know that facial animation is now being used for another mainstream real-time performance. Dreamworks recently opened “Shrek the Musical” on Broadway, and is using real-time motion capture to animate the face of the Magic Mirror at live shows. While the 3D art is much less detailed than a photorealistic actor, one can see some finely-tuned micro-expressions there that bring the character to life on the screen.

    I’m mentioning real-time live animation here, and your readers should also understand that the feature film uses of CGI and face animation allow directors and artists to pore over the results of the motion capture, lighting and 3D modeling for months, in order to get their results just right. Here we are not talking about machines, but about teams of artists who collaborate to bring these Oscar-winning performances to the big screen, now in digital formats rather than film.

  6. Thanks for the link, Gary. There are a couple of really good techie articles on Benjamin Button at VFX World, which you may have seen already:

    The second one is an interview with David Fincher. Here’s what he has to say:

    “I think that the greatest lesson to be learned from this was that it wasn’t about making a real human; it was about making a character. And I think the great news about Benjamin is that he is a fully realized character in a movie, and that there are all these little moments that allow you to understand and empathize with what’s going on in his little noggin, even if he’s not saying a lot and even if he can’t move a lot. You get who he is. And I attribute that to technology in service of an actor: very specific and very interesting and very varied ideas about who this character was and what he was going through.”

    I might blog about Button more fully at a later date, but even if the VFX crew were set the task of seamlessly integrating Brad Pitt’s performance with a CG chassis, there’s no getting away from the knowledge that something has been done to fundamentally alter Pitt’s body. Let me be clear on this, though – I’m not pointing this out as some kind of inherent flaw in the technology. I think it’s a positive facet of the medium that viewers can be engaged with a narrative even while part of their mind is processing the operations of the medium itself.

    Is there an ethical question about using WC Fields’ image? I’m sure the legal quandaries have been resolved in the wake of the Astaire Bill. The reason I was surprised that he had been chosen (the Kleiser-Walczak Marilyn Monroe simulation seemed to make more sense to me), was that his screen persona relied so heavily on his unpredictability and his befuddlement at modernity. Those micro-expressions you mention are not created by a director, perhaps not even controlled by the actor, but something so subtle and spontaneous that there may be no way of classifying them or reducing them to a limited set of options. But I’m always prepared to be persuaded, and I’ll be interested to see how you get on with Fields.

  7. Pingback: How Special Effects Work #3: Now that’s magic… « Spectacular Attractions

  8. Pingback: And the winner is… « Spectacular Attractions

  9. This article fascinates me. The mention of ‘make believe’ human beings being digitized. The film, starring Brad Pitt as Benjamin Button, is one example. The article speaks to morality within special effects technology and the motivations of the digital artist. Walt Disney, other than entertaining us, has left a trail of evolutionary bread crumbs. The article speaks of theatrical fantabulations (my word), from Shrek to the cinematic resurrection of the deceased through digital portrayals. My technologically uneducated self sees these apparitions as miracles that have evolved according to genetic laws built into the alchemy of the human brain and from its link to the eternal, imaginative capability of the human mind. Our history of art in religion’s stories presents ‘why we should do this, to get that’, which in my opinion is a very practical rationale that works in real time. As a fantasist of sorts, one who utilized a simple type of special effect over the years as a comic ventriloquist entertainer, I have always asked: do we, not only as theatricals, but all of us, follow a predetermined course in the development of our work beyond immediacy? Whether as a random attraction, or an artistic homemaker doing today’s chores, do we serve a morality of purpose beyond the elusiveness of the present moment? I believe so. In this article, about the digital world of technological development, I’m prompted to ask what is the purpose of refreshing our senses by gazing into a digital mirror, an infinity of magic, a world of make believe, or for that matter, why the very creation of the thing itself? As to the answers, I think they are as many as there are human beings on earth or stars in the sky.
