Computers, like humans, can learn. But when Google tries to fill in your search box based on only a few keystrokes, or your iPhone predicts words as you type a text message, it’s only a narrow mimicry of what the human brain is capable of.
The challenge in training a computer to behave like a human brain is both technological and physiological, testing the limits of computer and brain science. But researchers from IBM say they’ve taken a key step towards combining the two worlds.
The company announced on Thursday that it has built two prototype chips that it says process data more like the way humans digest information than the chips that now power PCs and supercomputers do.
The chips represent a significant milestone in a six-year-long project that has involved 100 researchers and some $41-million in funding from the US government’s Defense Advanced Research Projects Agency, or DARPA.
IBM has also committed an undisclosed amount of money. The prototypes offer further evidence of the growing importance of “parallel processing,” or computers doing multiple tasks simultaneously.
That is important for rendering graphics and crunching large amounts of data. The uses of the IBM chips so far are prosaic, such as steering a simulated car through a maze, or playing Pong.
It may be a decade or longer before the chips make their way out of the lab and into actual products. But what’s important is not what the chips are doing, but how they’re doing it, says Giulio Tononi, a professor of psychiatry at the University of Wisconsin at Madison who worked with IBM on the project.
The chips’ ability to adapt to types of information they weren’t specifically programmed to expect is a key feature.
“There’s a lot of work to do still, but the most important thing is usually the first step,” Tononi said in an interview. “And this is not one step, it’s a few steps.”
Technologists have long imagined computers that learn like humans. Your iPhone or Google’s servers can be programmed to predict certain behaviour based on past events. But the techniques being explored by IBM and other companies and university research labs around “cognitive computing” could lead to chips that are better able to adapt to unexpected information. IBM’s interest in the chips lies in their potential to help process real-world signals, such as temperature, sound or motion, and make sense of them for computers.
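To make that distinction concrete, the kind of programmed prediction described above can be as simple as counting past events. The Python sketch below is a hypothetical, minimal next-word predictor; it is not how Google’s or Apple’s systems actually work, only an illustration of a model that can echo patterns it has seen but cannot adapt beyond them.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word followed each word in past
# text, then suggest the most frequent follower. Purely illustrative.
history = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    followers[current][nxt] += 1

def predict(word):
    """Return the word most often seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # -> "cat": the model only repeats observed patterns
```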
IBM, which is based in Armonk, New York, is a leader in a movement to link physical infrastructure, such as power plants or traffic lights, and information technology, such as servers and software that help regulate their functions. Such projects can be made more efficient with tools to monitor the myriad analogue signals present in those environments.
Dharmendra Modha, project leader for IBM Research, said the new chips have parts that behave like digital “neurons” and “synapses” that make them different from other chips. Each “core,” or processing engine, has computing, communication and memory functions.
“You have to throw out virtually everything we know about how these chips are designed,” he said. “The key, key, key difference really is the memory and the processor are very closely brought together. There’s a massive, massive amount of parallelism.”
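IBM has not published the chips’ internals, so the following is only a rough software analogy, in Python, of the idea Modha describes: each “neuron” keeps its own state (memory) right next to its update rule (processing), “synapses” are just stored weights, and because each neuron updates independently, many can run in parallel. All names and numbers here are illustrative assumptions, not IBM’s design.

```python
import random

class Neuron:
    """Toy leaky integrate-and-fire unit. Its state (voltage and synaptic
    weights) lives alongside its update rule, loosely echoing the
    memory-next-to-processor idea. Not IBM's actual architecture."""

    def __init__(self, n_inputs, threshold=1.0, leak=0.9):
        self.weights = [random.uniform(0, 0.5) for _ in range(n_inputs)]  # "synapses"
        self.voltage = 0.0                                                # local memory
        self.threshold = threshold
        self.leak = leak

    def step(self, spikes):
        """Integrate incoming spikes (0/1 per input); fire if over threshold."""
        self.voltage = self.voltage * self.leak + sum(
            w for w, s in zip(self.weights, spikes) if s
        )
        if self.voltage >= self.threshold:
            self.voltage = 0.0  # reset after firing
            return 1
        return 0

# A "core" is a bundle of such neurons. Each neuron's step() touches only
# its own state, so in hardware all of them could update simultaneously.
core = [Neuron(n_inputs=4) for _ in range(8)]
inputs = [random.choice([0, 1]) for _ in range(4)]
print([neuron.step(inputs) for neuron in core])
```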
The project is part of the same research that led to IBM’s announcement in 2009 that it had simulated a cat’s cerebral cortex, the thinking part of the brain, using a massive supercomputer. Using progressively bigger supercomputers, IBM had previously simulated 40% of a mouse’s brain in 2006, a rat’s full brain in 2007, and 1% of a human’s cerebral cortex in 2009.
A computer with the power of the human brain is not yet near. But Modha said the latest development is an important step. “It really changes the perspective from ‘What if?’ to ‘What now?’” Modha said.
“Today we proved it was possible. There have been many sceptics, and there will be more, but this completes in a certain sense our first round of innovation.” — Sapa-AP