The computer screen on your desk and the television in your lounge could become things of the past, says Jason Thomas
Imagine sitting in a bus watching Star Wars Episode I: The Phantom Menace on what appears to be an Imax-size screen – courtesy of the visual equivalent of a Walkman built into your glasses.
It may sound like science fiction, but the technology is almost with us now. Virtual retinal display (VRD) scans images directly on to the retina, without the intervention of anything so crude or conventional as a TV screen, computer monitor or even the latest active matrix liquid crystal display panel.
The result is an almost life-like three-dimensional image – still or moving – which appears to be about a metre away. It can either replace or augment the user’s normal vision, depending on the application. Users say it’s a truly three-dimensional image with none of the haziness and lack of clarity of holograms.
It’s already being used in prototype form by the United States military in head-up displays and, its inventors say, it’s only a matter of time before it finds its way into cellphones, operating theatres, building sites, even glasses for the ultimate virtual reality experience. In fact, they reckon there’s no reason why it shouldn’t eventually all but replace conventional methods of personal display at about the same cost.
Developed by Microvision, VRD projects a very low-power, electronically encoded, rapidly scanning laser beam through the eye’s lens directly onto the retina, stimulating individual neurons at the back of the eye.
It’s been in development for the last six years, but the recent incorporation of ultra-violet laser technology has dramatically reduced power consumption and size. This means genuinely affordable and lightweight consumer devices could be on the market before the end of next year.
“The increased sensitivity the eye has to violet light means we can use less power than with red or green light,” said Tom Lippert, principal scientist at Microvision. “It also means we can eliminate all of the fibre optics while still increasing efficiency. Suddenly, the entire photonic system becomes the size of a sugar cube.”
In case you were wondering, the company is quick to stress that shining a laser beam directly into the eye is subject to strict safety requirements. “We have to adhere to the international safety standards,” said Lippert.
To put it into perspective, it’s estimated that even production military devices will emit about a thousandth of the power of the hand-held lasers used at nightclubs a few years ago.
Moreover, military applications will emit in the region of 200 microwatts, which is just below the accepted safety level; this is because the images they produce need to remain visible under much brighter ambient lighting. Consumer devices will emit only 30 nanowatts, because the need for high brightness and contrast isn’t as great. There is nothing inherently dangerous about laser light in itself; what matters is how much power is concentrated into the beam. It’s far more dangerous to stare at a 60-watt light bulb.
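To put those figures side by side, here is a rough back-of-the-envelope comparison. The figure for a hand-held nightclub laser is an assumption, inferred from the thousand-to-one ratio quoted above; the rest are the numbers given in this article.

```python
# Rough comparison of the power levels quoted in the article.
# The hand-held nightclub laser figure is an assumption, inferred from the
# claim that the military device emits about a thousandth of its power.
POWER_WATTS = {
    "60 W light bulb": 60.0,
    "hand-held nightclub laser (assumed)": 0.2,      # ~200 mW
    "military VRD": 200e-6,                          # 200 microwatts
    "consumer VRD": 30e-9,                           # 30 nanowatts
}

reference = POWER_WATTS["military VRD"]
for name, watts in POWER_WATTS.items():
    print(f"{name:38s} {watts:>12.3e} W  ({watts / reference:,.0f}x the military VRD)")
```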
Because there is no screen, the resolution of the image produced is limited only by the diffraction and optical aberrations of the light source – which, because it’s decoupled from the image producer, can be almost any kind of source.
This is because the image is produced by a single scanning beam and not discrete light sources, as in the case of a computer monitor. Lippert says the maximum resolution Microvision has managed to produce is 5 120 by 4 096 pixels in a field of view of 100 degrees.
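A quick back-of-the-envelope calculation shows why that figure is impressive: 5 120 pixels spread over 100 degrees works out at roughly 51 pixels per degree, or a little over one arcminute per pixel, which is close to the one-arcminute resolving power usually quoted for the human eye. The sketch below simply reproduces that arithmetic.

```python
# Back-of-the-envelope check of the quoted figures: 5 120 pixels across a
# 100-degree field of view is roughly 51 pixels per degree, i.e. a little
# over one arcminute per pixel, close to the resolving power of the eye.
h_pixels = 5120
fov_degrees = 100

pixels_per_degree = h_pixels / fov_degrees
arcmin_per_pixel = 60 / pixels_per_degree

print(f"{pixels_per_degree:.1f} pixels per degree")
print(f"{arcmin_per_pixel:.2f} arcminutes per pixel")
```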
This is still not good enough for what’s called “full immersion” in a video game, for example, which would require a field of view in the region of 140 degrees and would need to produce a variable resolution depending on where the viewer’s eyes are focused – in much the same way as our own vision works. Nevertheless, it’s far better than anything else so far achieved.
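For what it’s worth, the idea of varying resolution with gaze can be sketched very simply: keep full detail where the eye is pointed and let it fall off towards the periphery. The numbers and the falloff curve below are purely illustrative assumptions, not anything Microvision has described.

```python
import math

# Toy illustration of "variable resolution depending on where the viewer's
# eyes are focused": full detail near the gaze point, falling off towards
# the periphery. The curve and the numbers are illustrative assumptions.
def pixels_per_degree(angle_from_gaze_deg, peak=51.2, floor=5.0, rolloff_deg=20.0):
    """Return a target pixel density for a point some angle away from the gaze."""
    falloff = math.exp(-(angle_from_gaze_deg / rolloff_deg) ** 2)
    return floor + (peak - floor) * falloff

for angle in (0, 10, 30, 70):
    print(f"{angle:3d} deg from gaze: {pixels_per_degree(angle):5.1f} px/deg")
```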
In fact the system is a great deal more suited to the way our vision works than traditional methods of display. After all, images are projected onto our retinas all the time, and the intervention of an intermediary screen really only gets in the way.
A VRD system consists of four primary components: drive electronics, light sources, scanners and optics. The drive electronics receive and process signals from an image source (a computer, a video camera, or any video output). The processed signals contain information that controls the intensity and mix of colour, along with the co-ordinates that position the individual picture elements (pixels) making up the image.
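As a rough sketch of the kind of per-pixel information being described, here is how one might model a single sample coming out of the drive electronics; the field names and value ranges are illustrative assumptions rather than Microvision’s actual signal format.

```python
from dataclasses import dataclass

# Sketch of the per-pixel information the drive electronics are described as
# extracting from the incoming video signal: a position on the retinal raster
# plus the drive levels for the three colour channels. Field names and ranges
# are illustrative assumptions, not Microvision's actual signal format.
@dataclass
class PixelSample:
    x: int          # horizontal position in the scan line
    y: int          # scan line number
    red: float      # 0.0-1.0 drive level for the red source
    green: float    # 0.0-1.0 drive level for the green source
    blue: float     # 0.0-1.0 drive level for the blue source

sample = PixelSample(x=640, y=512, red=0.8, green=0.3, blue=0.1)
print(sample)
```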
The VRD uses a very low power light source to create and convey a single pixel at a time through the pupil to the retina. With colour images, three light sources – red, green and blue – are modulated then merged to produce a pixel of the appropriate colour.
Horizontal and vertical scanners “paint” an image on the eye by rapidly moving the light source across and down the retina. Finally, refractive and reflective optical elements project the rapidly scanning beam of light through the viewer’s pupil and onto the retina to create a large virtual image.
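Putting those last two steps together, the whole process behaves much like a conventional raster scan: modulate the three light sources to the right colour mix for each pixel, then steer the merged beam to the next spot on the retina. The toy loop below sketches that flow; all of the function names are illustrative stand-ins, not a real Microvision interface.

```python
# Toy raster-scan loop showing how the pieces described above fit together:
# for each pixel, the three light sources are modulated to the right colour
# mix while the horizontal and vertical scanners steer the merged beam to the
# next spot on the retina. The callables are illustrative stand-ins.
def draw_frame(frame, set_rgb_intensity, steer_beam):
    """frame: 2-D list of (r, g, b) tuples; the two callables represent
    the light-source modulators and the scanner drive, respectively."""
    for y, scan_line in enumerate(frame):          # vertical scanner steps down
        for x, (r, g, b) in enumerate(scan_line):  # horizontal scanner sweeps across
            set_rgb_intensity(r, g, b)             # modulate and merge the sources
            steer_beam(x, y)                       # project this pixel onto the retina

# Example: a tiny 2x2 all-white frame driven with print() stand-ins.
tiny_frame = [[(1.0, 1.0, 1.0)] * 2 for _ in range(2)]
draw_frame(tiny_frame,
           set_rgb_intensity=lambda r, g, b: print(f"rgb=({r},{g},{b})", end=" "),
           steer_beam=lambda x, y: print(f"-> ({x},{y})"))
```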