Faking a human face is the holy grail of computer animation. The most inexpressive, wooden-faced person you know is in fact a master of thousands of facial expressions – the differences between them may be extremely subtle, but they are real, and you are conditioned to spot them. So how can anyone hope to use a computer to mimic the flickering fluidity of a face, and get away with the deception – especially if the face is one of the most famous on the planet?

On these grounds, at least, The Matrix Reloaded does merit some of the enormous hype that has accompanied its release. It's not the first film to attempt to simulate human faces, but it's perhaps the first to pull it off – although the film makers are quick about it, giving only flashes of computer-generated faces, and one of the actors in question is Keanu Reeves, a man who makes use – one might argue – of a limited number of the 10,000 facial expressions available to our species.

The sequel to the 1999 sci-fi blockbuster The Matrix has opened to mixed reviews and will probably disappoint many of those who loved the original. But even its harshest critics will agree that the special effects are remarkable. Money was not a problem: the budget for the two sequels (a third film, Matrix Revolutions, is due out later this year) has been put at $300 million. To film the white-knuckle car chase at the climax of the film, for example, the film makers decided to purpose-build a quarter-mile stretch of motorway inside an old US navy base. But for the most impressive images, they turned to computers.

The original film won awards – and spawned a succession of imitators – for its depictions of "bullet time", a visual trick that slowed the action to a standstill and allowed the camera to pan around. But for Reloaded, the effects team went back to the drawing board. "It was evident that we couldn't go any further by utilising the technology from the first bullet time shots," says John Gaeta, visual effects supervisor.
"It was too restrictive and too labour intensive. The concept of bullet time needed to graduate to the true technology it suggested." That technology, Gaeta says, is "virtual cinematography", in which the actors, sets, locations and even the camera exist only inside a computer.

The most striking use of this in Matrix Reloaded is the scene where Neo, the computer hacker turned quasi-Messiah played by Keanu Reeves, does battle with chief bad guy Agent Smith and 99 of his clones. The "burly brawl", as it became known on the film set, begins with close-up shots of the two actors and a rapidly growing number of stuntmen (whose faces were later computer generated to make them look like Agent Smith, played by Hugo Weaving). But the Smith clones just keep coming, and eventually the camera appears to pull out to reveal a whirlwind of action.

"The last two minutes of that sequence are entirely computer generated: Neo and the background and the bad guys – everything you see on the screen," says George Borshukov, who helped develop the film's effects. "What we've done at the end is impossible to film."

In the midst of all this virtual action, they had to get the faces right. "Faces are really difficult," says Anselmo Lastra, who works in the computer science department at the University of North Carolina. "Light doesn't reflect off a face from the surface, it exhibits what's called sub-surface scattering. It goes in and then reflects from underneath," Lastra says. "Something like that happens if a desk is varnished, and it's very difficult to get it right." And it has to be right. "[Otherwise] it can just end up looking like a cartoon," Lastra says.

To get the faces right, the Matrix film makers used a technique called image-based rendering. This is a relatively new development in computer graphics, and different from the approaches used to create most effects seen on the screen so far.
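Lastra's description of light going in and reflecting from underneath can be sketched as a toy model: in one dimension, each point on the surface re-emits a distance-weighted blend of the light that entered around it, which is why shadow edges on skin look soft rather than cartoon-hard. The scattering distance below is a made-up illustrative parameter, and this is nothing like the film's production renderer:

```python
import math

def subsurface(incident, mean_free_path=2.0):
    """Toy 1-D model of sub-surface scattering.

    Light entering the skin at one sample point diffuses and
    re-emerges nearby, so each point's outgoing light is a
    distance-weighted average of the light arriving around it.
    (With surface-only reflection, each point would simply
    return incident[i] unchanged.)
    """
    out = []
    for i in range(len(incident)):
        total = weight = 0.0
        for j, light in enumerate(incident):
            w = math.exp(-abs(i - j) / mean_free_path)
            total += w * light
            weight += w
        out.append(total / weight)
    return out

# A hard shadow edge falling across the skin...
edge = [1.0] * 5 + [0.0] * 5
# ...comes out softened: light bleeds under the surface into the shadow.
print([round(x, 2) for x in subsurface(edge)])
```

Production renderers attack the same blurring problem in three dimensions, with physically measured diffusion profiles rather than a one-line kernel, which is part of what makes faces so expensive to get right.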
Most previous computer-drawn images effectively reproduce real-life objects by building up three-dimensional models that can be rotated and made to move. In the 1980s this produced the wire-frame style images seen in early computer games. Then in the 1990s, adding colour and texture to the frames allowed computerised passengers to waddle across the deck of the Titanic and dinosaurs to walk in Jurassic Park. More recently, thousands of creatures based on the same basic 3D shapes rampaged around Middle Earth in Lord of the Rings. Complex computer techniques have been developed to make these models look real by adding artificial shadows and contours, but it is still very difficult to render a realistic human face in this way.

Image-based rendering takes a different approach: essentially constructing computer models directly from photographs of the building, person or face being simulated. For the artificial faces seen up close and personal in the burly brawl, this meant Reeves and Weaving sitting in front of a semicircle of five high-resolution digital video cameras. As the actors played out a range of emotions and expressions, the cameras recorded their performances in minute detail, down to the pores and hair follicles. Computer software then built these pictures into artificial visages that could be grafted on to the real-life heads of stuntmen (as in the early stages of the scene), or combined with computer-generated bodies for the virtual two minutes that follow.

It was a big task, and even the team charged with creating the images was unsure whether it could be achieved. "We still thought at the beginning that we might not be able to hold the shots that close," Borshukov says.

"The results are staggering," says producer Joel Silver. "These guys didn't just raise the bar . . . for what is visually possible, they obliterated it." The results are certainly impressive, but the technique is not as futuristic or innovative as Silver makes it sound.
In fact, it has its roots in mid-19th-century France.

In 1851 the French officer Aimé Laussedat showed that three-dimensional maps could be generated from photographs taken by aerial cameras suspended from kites and balloons. The technique became known as photogrammetry and is now widely used in everything from remote sensing of the Amazon jungle to designing oil rigs. Similar to the way the brain builds up an impression of depth from the flat images supplied by the left and right eyes, map makers and scientists find clues such as lines of perspective in photographs to build up three-dimensional models.

Film makers were slow to catch on. It took a 1997 film called The Campanile Movie, from Borshukov and colleagues including Paul Debevec, to set creative tongues wagging. The film used still photographs and computer algorithms to generate an aerial fly-by of the University of California, Berkeley, campus. As word of the film spread, the Berkeley team were snapped up by Hollywood and quickly used their image-based techniques to create, among other effects, digital backgrounds for the bullet time scenes in the first Matrix film.

The technique is not quite as simple as it sounds, and the captured images need plenty of computerised massaging to render them lifelike. To get the tricky effects of reflected light off a face just right, for example, the Matrix Reloaded team used software called Mental Ray, developed by a German team. And Borshukov is just one of the computer scientists who worked on the project about to publish a string of scientific papers detailing how they got hair, clothes and skin to look realistic.

Their apparent success in harnessing virtual cinematography will fuel talk about its potential uses beyond the entertainment industry. The US military is interested in developing combat simulations, and film makers have already warned politicians (or should that be tipped off?) about the potential for mass deception.
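The two-viewpoint principle behind photogrammetry, the same one the brain uses with two eyes, reduces to one line of arithmetic for a pair of level, side-by-side cameras: the nearer a point is, the more it appears to shift (its "disparity") between the two photographs, so depth is inversely proportional to that shift. A minimal sketch, with a hypothetical camera rig (the focal length and baseline below are illustrative, not from any real setup):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair.

    focal_px: camera focal length, in pixels.
    baseline_m: distance between the two cameras, in metres.
    disparity_px: horizontal shift of the same point between
    the left and right images, in pixels.
    """
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 1000-pixel focal length, cameras 0.5 m apart.
print(depth_from_disparity(1000, 0.5, 25))   # small shift: a distant point
print(depth_from_disparity(1000, 0.5, 100))  # large shift: a nearby point
```

Full photogrammetry generalises this to many cameras and many matched points, which is what turns a ring of photographs into a 3D model of a campus, an oil rig, or a face.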
Still, actors such as Keanu Reeves will probably keep picking up their $20-million pay cheques for some time yet. "If there are things they can film for real then it's still cheaper for them to do that," Borshukov says.

And, as a result, the glitches in the real-life matrix will continue. "You can actually see that it's not him [Keanu] in one of the shots," he says of the burly brawl's early stages. "But only if you know what his stunt double looks like."