IBM reveals 'brain-like' chip with 4,096 cores

The human brain is the world's most sophisticated computer, capable of learning new things on the fly, using very little data.

It can recognise objects, understand speech, respond to change.

Since the early days of digital technology, scientists have worked to build computers that work more like the three-pound organ inside your head.

Most efforts to mimic the brain have focused on software, but in recent years, some researchers have ramped up efforts to create neuro-inspired computer chips that process information in fundamentally different ways from traditional hardware. This includes an ambitious project inside tech giant IBM, and today, Big Blue released a research paper describing the latest fruits of these labours. With this paper, published in the academic journal Science, the company unveils what it calls TrueNorth, a custom-made "brain-like" chip that builds on a simpler experimental system it released in 2011.

TrueNorth comes packed with 4,096 processor cores, and it mimics one million neurones and 256 million synapses, two of the fundamental biological building blocks that make up the human brain. IBM calls its digital neurones "spiking neurones". What that means, essentially, is that the chip can encode data as patterns of pulses, which is similar to one of the many ways neuroscientists think the brain stores information.

"This is a really neat experiment in architecture," says Carver Mead, a professor emeritus of engineering and applied science at the California Institute of Technology who is often considered the granddaddy of "neuromorphic" hardware. "It's a fine first step."

Traditional processors -- like the CPUs at the heart of our computers and the GPUs that drive graphics and other math-heavy tasks -- aren't good at encoding data in this brain-like way, he explains, and that's why IBM's chip could be useful. "Representing information with the timing of nerve pulses...that's just not been a thing that digital computers have had a way of dealing with in the past," Mead says.
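To make pulse-based encoding a little more concrete, here is a minimal sketch of a "leaky integrate-and-fire" neurone, a common textbook abstraction in neuromorphic computing rather than IBM's own neurone model; the function name and parameter values are purely illustrative. The idea it demonstrates is that the strength of an input shows up as the timing and density of spikes, not as a number stored in memory.

```python
# Minimal leaky integrate-and-fire (LIF) neurone sketch.
# Illustrative only: a textbook abstraction, not IBM's TrueNorth neurone model.

def simulate_lif(input_current, steps=100, leak=0.95, threshold=1.0):
    """Return the time steps at which the neurone spikes for a constant input."""
    potential = 0.0
    spike_times = []
    for t in range(steps):
        potential = potential * leak + input_current  # integrate input, with leak
        if potential >= threshold:                    # fire when the threshold is crossed
            spike_times.append(t)
            potential = 0.0                           # reset after a spike
    return spike_times

# A stronger input produces earlier and more frequent spikes, so the "value"
# is carried by the timing pattern of pulses rather than a stored number.
print(simulate_lif(0.08))  # weak input  -> sparse, late spikes
print(simulate_lif(0.30))  # strong input -> dense, early spikes
```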

Others say that in order to break with current computing paradigms, neurochips should learn. "It's definitely an achievement to make a chip of that scale...but I think the claims are a bit stretched because there is no learning happening on chip," says Narayan Srinivasa, a researcher at HRL Laboratories who's working on similar technologies under DARPA's SyNAPSE programme, which also funds IBM's work. "It's not brain-like in a lot of ways." While the finished networks run on TrueNorth, all the learning happens off-line, on traditional computers. "The von Neumann component is doing all the 'brain' work, so in that sense it's not breaking any paradigm."

To be fair, most learning systems today rely heavily on off-line learning, whether they ultimately run on CPUs or on faster, more power-hungry GPUs. That's because learning often requires reworking the algorithms, and that is much harder to do in hardware, which is far less flexible than software. Still, IBM says on-chip learning is not something it's ruling out.
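As a rough illustration of that off-line workflow (a generic sketch, not IBM's toolchain; every name and number below is made up for the example), a network's weights are fitted on a conventional machine, then frozen before being loaded onto a device that only runs the network and never updates it:

```python
# Sketch of the off-line learning workflow described above: weights are fitted
# on a conventional computer, then frozen (and quantised) for inference-only
# hardware. Everything here is illustrative.

import numpy as np

rng = np.random.default_rng(0)

# 1. Learn off-line: fit a single linear layer to toy data by least squares.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=200)
learned_w, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2. Freeze the result for the device: quantise to 8-bit integers plus a scale,
#    since the "chip" will only ever read these weights, never change them.
scale = np.abs(learned_w).max() / 127
frozen_w = np.round(learned_w / scale).astype(np.int8)

# 3. "On-chip" inference uses the frozen weights only -- no learning happens here.
def run_on_device(x):
    return x @ (frozen_w.astype(np.float32) * scale)

print(run_on_device(X[0]), y[0])  # the two values should be close
```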

Critics say the technology still has many tests to pass before it can supercharge data centres or power new breeds of intelligent phones, cameras, robots or Google Glass-like contraptions. To think that we're going to have brain-like computer chips in our hands soon would be "misleading," says Yann LeCun, whose lab has worked on neural-net hardware for years. "I'm all in favour of building special-purpose chips for running neural nets. But I think people should build chips to implement algorithms that we know work at state-of-the-art level," he says. "This avenue of research is not going to pan out for quite a while, if ever. They may get neural-net accelerator chips in their smartphones soonish, but these chips won't look at all like the IBM chip. They will look more like modified GPUs."

This article originally appeared on Wired.com