19 September 2010

3D games enter a new generation

The launch of PlayStation Move and Xbox Kinect signals a revolution in the way that we interact with computer games.

Between Pac-Man, the first video game superstar, who zipped into life in the United States 30 years ago this month, and a game such as this year’s Red Dead Redemption seems to lie an eternity in the medium’s evolution. Designer Toru Iwatani said he wanted Pac-Man to be “the simplest character possible, without any features”; whereas the team of several hundred responsible for fashioning the sprawling Western epic created a life-like protagonist in reformed outlaw John Marston. But in certain fundamental respects, such games have barely changed at all — the action still takes place on a 2D screen, directed with a joystick-like controller.

That is now changing, and within the next five years any firm concept of what constitutes a computer game will require radical revision. The shift begins this Christmas, with the launch of new motion-control systems for Sony’s PlayStation 3 and Microsoft’s Xbox 360 consoles. Although the Nintendo Wii has used motion tracking for four years, the new devices are noticeably more sophisticated: PlayStation Move still involves a hand-held controller, but a player’s movements can now be tracked in 3D space with remarkable precision using its companion “Eye” camera placed near the screen. Kinect for the Xbox 360 is a wholly hands-free device that employs a camera and an infrared depth sensor, translating any movement into game action. Both systems also feature microphones that allow voice input.

Created by British design pioneer Peter Molyneux at Lionhead Studios in Surrey, a demo for Kinect called Milo and Kate suggests what the future holds. Milo is a virtual boy with whom players interact using voice, gesture and facial expressions. A sophisticated artificial intelligence system lets the character learn words and actions from the player, as well as interpret emotions in the user’s voice. Milo can apparently even recognise different people, introducing himself to anyone he hasn’t met before. You can teach him about art by drawing a picture and holding it up to the Kinect camera, and he will then copy your masterpiece. In one scene, Milo is in a garden and asks whether he should stamp on a snail; his moral future is in the hands of the player.

A computer that can see and hear you
“What Kinect gives you is a computer that can see and hear you, and that’s pretty fundamental,” says Andrew Oliver, chief technology officer of UK developer Blitz Games Studios and an expert in 3D and motion technologies. “It’s really good as a natural user interface but it’s up to designers to work out how we can interpret that.”

In this future, there will be detective games in which players verbally interrogate computer-controlled characters — who themselves respond differently to an aggressive tone or placating physical gestures. Realistic relationships in games will emerge, with players having to charm intelligent artificial beings. “Your reactions will be directly perceived by the system, expanding your input capabilities,” says Richard Marks, who leads PlayStation’s research and development department. “What you’re doing with your body and your face will actually matter. You’ll be able to have a very rich communication with the game and with other players.”

3D technology will also help bring the likes of Milo to life. Sony has already launched a range of 3D titles for the PlayStation 3 that are compatible with its latest Bravia 3D TV sets, while companies such as NVIDIA are creating graphics cards that let PC players experience a huge range of titles in three dimensions.

While 3D at the cinema still feels like something of a gimmick, 3D technology brings tangible benefits to gaming. Andrew Oliver cites a demo of Crysis 2, a first-person shooter due for release next year that will be entirely playable in 3D. “The developers showed a screenshot of a forest and said, ‘spot the sniper’, and in 2D you couldn’t see anything,” he says. “But with the same shot in 3D, you could instantly see him lurking in the foliage. And that’s important — that’s sort of the difference between life and death!”

Such systems require viewers to wear “active shutter” glasses to generate a 3D image. These are typically controlled by an infrared or Bluetooth transmitter whose signal makes the glasses alternately darken over one eye, then the other, in synchronisation with a TV display that alternates frames rendered from two slightly different perspectives — creating a stereoscopic 3D effect. But Nintendo recently announced its 3DS console, due next year, which dispenses with glasses altogether through a process known as autostereoscopy, while Sony has demonstrated a prototype that creates a truly three-dimensional holographic image that viewers can walk around and see from any angle. The device can be plugged into a PC, allowing it to run holographic 3D games. During a demonstration at the recent SIGGRAPH exhibition in Los Angeles, Sony showed off an image of a woman’s head that reacted to the viewer’s hand movements, orientating itself accordingly.
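To make the shutter principle concrete, here is a minimal sketch in Python of how a game engine might prepare alternate-frame stereo output. Everything in it is illustrative rather than drawn from any console’s actual SDK: the eye separation value, the coordinate convention and the frame ordering are all assumptions.

    # Illustrative only: alternate-frame ("active shutter") stereoscopy.
    # The display shows left- and right-eye frames in turn while the
    # glasses darken the opposite lens, so each eye sees its own view.

    EYE_SEPARATION = 0.065  # rough human interpupillary distance, metres (assumed)

    def eye_positions(head):
        """Offset a virtual camera half the eye separation to each side."""
        x, y, z = head
        half = EYE_SEPARATION / 2
        return (x - half, y, z), (x + half, y, z)

    def stereo_frames(head):
        """Yield (eye, camera_position) in the order the TV displays them."""
        left, right = eye_positions(head)
        yield "left", left    # glasses black out the right lens
        yield "right", right  # glasses black out the left lens

    for eye, cam in stereo_frames((0.0, 1.6, 2.0)):
        print(eye, cam)  # a real engine would render one frame per eye here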

Stuff of science fiction
Researchers at Tokyo University, meanwhile, are developing touchable holograms. Using ultrasonic waves to give the sensation of pressure, users are able to feel 3D holographic characters running about on their hands, or touch holographic rain drops as they fall. This still feels like the stuff of science fiction, but such technology will surely soon filter into the market.

Before that point is reached, though, the combination of motion controls and 3D display technology will produce games in which, for example, a virtual sword is projected into the player’s room for them to “grab” and wield. “We’re applying this technology to games like shooters,” says Marks. “In our research we’ve looked into head tracking, so players can actually move their heads to peek around corners. And with a 3D display, the viewpoint can also be modified and tilted as you move your position, so you get a really strong motion parallax — there’s a very convincing feeling of 3D from lots of different cues.”
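The head-tracking idea Marks describes can be sketched in a few lines of Python. This is not Sony’s implementation; the mapping, scale factor and coordinate convention below are assumptions made purely for illustration.

    # Illustrative head-coupled perspective ("motion parallax"): the
    # tracked head position shifts the virtual camera, so leaning left
    # in the real world lets the player peek around an in-game corner.

    def camera_offset(head_x, head_y, head_z, scale=1.0):
        """Map a head position (metres, relative to the screen centre)
        to a virtual camera offset in the game world."""
        return (head_x * scale, head_y * scale, head_z * scale)

    # Leaning 20 cm to the left moves the camera 20 cm left in-world,
    # revealing geometry that was hidden behind a corner.
    print(camera_offset(-0.2, 0.0, 0.0))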

Using our bodies to control games may in fact be only the beginning of our transmogrification into walking joypads. The next step involves mind control. Already there are several consumer headsets that use electroencephalography, or EEG, to monitor brainwave patterns and allow gamers to control elements of the action through the power of their thoughts. Priced at $199, the Mindset — from the futuristically named NeuroSky — is an EEG headset that comes bundled with its own game demo, NeuroBoy. “You play a boy with telekinetic powers,” says the company’s Tansy Brook. “You still use traditional mouse and keyboard controls to move and select objects, but there are a number of features that you have to use your mind for: you can relax to levitate an object, or concentrate to set it on fire — which is very popular.”
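A toy version of that mechanic is easy to sketch. The read_headset stub below is hypothetical, as are the two 0–100 signals and the thresholds: they are loosely modelled on the “attention” and “meditation” style readings consumer EEG headsets are reported to expose, not on NeuroSky’s actual SDK.

    # Illustrative mapping of EEG-style signals to NeuroBoy-like actions.
    import random

    def read_headset():
        # Hypothetical stand-in for a real EEG driver.
        return random.randint(0, 100), random.randint(0, 100)  # attention, meditation

    def telekinesis(attention, meditation):
        if meditation > 70:
            return "levitate"  # relaxing lifts the object
        if attention > 70:
            return "ignite"    # concentrating sets it on fire
        return "idle"

    for _ in range(5):
        att, med = read_headset()
        print(att, med, telekinesis(att, med))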

A key problem with brain-computer interfaces in the past was that it took a long time to calibrate the sensors to pick up on each user’s neural activity, but companies such as NeuroSky and its rival Emotiv have solved this with proprietary algorithms that spot patterns of activity.

NeuroSky has a new headset due later this year at $99, and is already working with game makers including Sega and Square Enix. The latter has produced a demo of a shooting game, Judecca, in which players have to focus their thoughts to see invisible demons. This is likely to be how mind control slips into the marketplace — as an added feature for conventional games.

Peripherals that can read our physical states are also being introduced. A forthcoming game titled Innergy, from French publisher Ubisoft, involves a biofeedback sensor that clips on to the player’s finger and checks their stress levels. Meanwhile, Valve, the US developer behind the massively successful Half-Life series of shooters, is experimenting with biometrics and ways of analysing player pulse rates to reshape the gaming experience.
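Neither company has published its approach, but the underlying feedback loop can be sketched speculatively in Python: read a biometric signal, estimate stress, and nudge the game’s intensity accordingly. The resting rate, thresholds and step sizes below are invented for illustration.

    # Speculative biofeedback loop: heart rate in, game intensity out.
    RESTING_BPM = 70  # assumed baseline

    def adjust_intensity(intensity, pulse_bpm):
        stress = (pulse_bpm - RESTING_BPM) / RESTING_BPM
        if stress > 0.3:               # clearly stressed: ease off
            return max(0.1, intensity - 0.1)
        if stress < 0.1:               # calm: ratchet the challenge up
            return min(1.0, intensity + 0.1)
        return intensity               # in the sweet spot: hold steady

    intensity = 0.5
    for bpm in (72, 75, 95, 100, 80):
        intensity = adjust_intensity(intensity, bpm)
        print(bpm, round(intensity, 2))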

New interfaces
“The last 20 years have all been graphics, graphics, graphics,” says Nam Do, the co-founder of Emotiv. “The next five years will be about new interfaces — and brain-computer interfaces will play a major role in giving users new experiences. Harry Potter will perform true magic in a game; it won’t be about pushing buttons. Headsets can also read the player’s emotions, bringing in a totally new dimension. Games will be able to constantly measure your level of excitement, boredom or fear, and tailor the action to make it challenging and exciting all the time.”

Soon the distinction between the virtual and the real may disappear, too. Developers working with consoles like Kinect, as well as with camera-equipped smartphones, are creating augmented reality games that merge computer graphics with real-life footage. In TagDis, for example, you can leave virtual graffiti on real walls. “As you leave more ‘tags’ you become the king of an area and earn points as others view your work,” says Lester Madden, an augmented reality (AR) researcher. It is his belief that a technology called “natural feature tracking” will be a huge element of AR games in the future. “Developers could use it to have hordes of zombies walking up your stairs and through your doors or climbing through your windows,” he enthuses.
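The article doesn’t say which technique Madden has in mind, but one common recipe for natural feature tracking uses keypoint matching, sketched below with OpenCV in Python. The reference image name is a hypothetical asset, and a real AR game would run this on every video frame, rendering graphics warped through the recovered homography.

    # Sketch of natural feature tracking: match ORB keypoints from a
    # known reference image (say, a wall) in a camera frame, then
    # recover the homography that says where to draw virtual content.
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    reference = cv2.imread("wall.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical asset
    ref_kp, ref_desc = orb.detectAndCompute(reference, None)

    def locate_surface(frame_gray):
        """Return the reference-to-frame homography, or None if the
        surface wasn't found."""
        kp, desc = orb.detectAndCompute(frame_gray, None)
        if desc is None:
            return None
        matches = matcher.match(ref_desc, desc)
        if len(matches) < 10:
            return None
        src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H  # warp the virtual graffiti through H before drawing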

Beyond even this lies an era of what is known as pervasive digital entertainment — in which, once we’ve signed up to a gaming experience, we’ll never truly leave it. The action will take place in our living rooms, on the street and on social network sites. And, via the omniscient interconnectivity of the web, all our friends will be there too.

This is the future that many game pundits are expecting. But it won’t just be about what we now consider to be games, and it’s not all about cool technologies such as 3D displays. What is emerging in all areas of consumer technology is a process known as gamification — from global-positioning software to financial services websites, everything is becoming more game-like, because we as a species like to play.

Last month, mobile entertainment company Booyah launched InCrowd, a location-based social game for the iPhone that asks users to “check in” when they visit real-world places, and hands out points for sharing experiences.

Employing the new Facebook Places feature, which lets you see where your friends are, it speaks of a future in which shopping becomes a multiplayer “deathmatch” against rival players. Together with services such as Foursquare and Gowalla, this is just the beginning.
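The point values and rules of such games aren’t detailed here, so the following Python toy model of check-in scoring is purely illustrative of the mechanic.

    # Toy model of location-based check-in scoring, InCrowd-style.
    from collections import defaultdict

    CHECK_IN_POINTS = 10   # assumed values, for illustration only
    SHARE_BONUS = 5

    scores = defaultdict(int)

    def check_in(player, place, shared=False):
        """Award points for visiting a place, plus a bonus for sharing."""
        scores[player] += CHECK_IN_POINTS + (SHARE_BONUS if shared else 0)
        return scores[player]

    print(check_in("alice", "coffee shop", shared=True))  # 15
    print(check_in("alice", "bookshop"))                  # 25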

“With gamification, we’ll find boring things fun — like paying our taxes, getting regular checkups, buying groceries or checking the weather,” says gamification expert Gabe Zichermann.

“When fun becomes a principal design requirement or objective of all kinds of industries — from healthcare and government to finance and transportation — the effect on our happiness will be unprecedented.”

This is a utopian vision — a vision of all-encompassing, multi-sensorial gaming — that to some will sound more terrifying than fun. It’s certainly a long way from Pac-Man. – guardian.co.uk