Although it does not attract as much attention as quantum computing, neuromorphic computing is also a discipline with enormous potential as a complement to the classical computation with which we are all familiar. In fact, not only are some of the most reputable research centers in the world, such as MIT, contributing to its development; Intel, IBM and HP are three of the companies betting hardest on it.
What neuromorphic computing proposes is to mimic the behavior of the animal nervous system in general, and that of the brain in particular. The starting point, described by Carver Mead, the American electrical engineer who proposed this idea in the 1960s, consisted of treating transistors as switching devices of an analog nature, and not as digital switches.
This approach seemed appropriate because the behavior of transistors resembles the way neurons communicate with each other through electrical impulses (a mechanism known as the neuronal synapse). Mead's idea is original and, above all, very attractive, but putting it into practice requires a multidisciplinary approach in which physics, biology, mathematics, computer science and microelectronics are forced to collaborate.
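To make the analogy concrete, the spiking behavior Mead had in mind is commonly modeled in textbooks as a leaky integrate-and-fire neuron: the cell integrates incoming current as a continuous (analog) quantity and only emits a discrete spike when a threshold is crossed. The sketch below is a minimal illustration of that standard model, not code for any real neuromorphic chip; all parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates incoming current (an analog process, not a 0/1 switch) and
# "fires" a spike when it crosses a threshold, then resets.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return the list of time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:               # threshold crossed
            spikes.append(t)                     # emit a spike
            potential = reset                    # reset after firing
    return spikes

# A steady input current of 0.3 charges the membrane until it fires.
print(simulate_lif([0.3] * 10))  # -> [3, 7]
```

Communication between such neurons happens only when spikes occur, which is one reason spiking hardware can be so frugal with energy: silent neurons cost almost nothing.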
In any case, the ultimate purpose of this discipline, which has undergone remarkable development over the last decade and a half, is to develop electronic systems capable of processing information more efficiently. In fact, researchers aspire to make them as efficient as an organic brain, a very ambitious and interesting goal, but also a very difficult one to achieve.
An organic brain is capable of carrying out a great deal of work with very little energy, and, furthermore, the way it processes information makes it very effective at some problems, but also very inefficient at others. This explains why a neuromorphic processor can solve some problems in less time and using less energy than a classic computer, while at others it can be much less efficient than the latter.
A neuromorphic system can be up to sixteen times more efficient
We have talked about Intel's Loihi neuromorphic chip several times in Engadget. It is manufactured with 14 nm photolithography and incorporates 128 cores and a little more than 130,000 artificial neurons. According to Intel, it has been designed for research projects and has capabilities similar to those of a tiny brain.
These specifications are quite striking, but what is most surprising is that each of these artificial neurons can communicate with thousands of its neighbors, creating an intricate network that emulates the neural networks of our own brain. This is precisely where the power of Loihi resides.
Taking this chip as a starting point, Intel has developed more complex neuromorphic systems that combine several Loihi units to adapt to significantly higher workloads and more demanding processes. The simplest of these systems is Kapoho Bay, and it contains two Loihi chips with 262,000 neurons that allow it, according to Intel, to identify gestures in real time and read braille, among other processes.
Some of the problems that neuromorphic systems are good at are pattern identification, machine learning, selecting the optimal solution from a wide range of options, and constraint satisfaction problems. Until now, researchers had tested how effectively neuromorphic chips and algorithms deal with these problems, but it was not clear whether they were noticeably more efficient from a strictly energetic point of view.
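One of these problem classes, selecting the best option among many, maps naturally onto so-called winner-take-all circuits, in which competing "neurons" suppress one another until only the best-supported candidate remains active. The following is a simplified, hypothetical sketch of that idea in plain Python; it is not Intel's algorithm or Loihi code, and the dynamics and parameters are illustrative only.

```python
# Toy winner-take-all dynamics: each candidate option is a "neuron"
# whose activity is reinforced by its own evidence and suppressed by
# the activity of its competitors (lateral inhibition). After a few
# iterations the best-supported option dominates and the rest decay.

def winner_take_all(evidence, steps=50, inhibition=0.2):
    """Return the index of the option that wins the competition."""
    activity = [1.0] * len(evidence)          # all options start equal
    for _ in range(steps):
        total = sum(activity)
        activity = [
            max(0.0, a + e - inhibition * (total - a))  # excite - inhibit
            for a, e in zip(activity, evidence)
        ]
    return max(range(len(evidence)), key=lambda i: activity[i])

print(winner_take_all([0.1, 0.5, 0.3]))  # picks index 1, the strongest evidence
```

In spiking hardware this competition unfolds through inhibitory connections rather than an explicit loop, which is part of what makes such selection problems cheap to run on chips like Loihi.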
This has changed. Just a few days ago, several researchers from Intel and the Institute for Theoretical Informatics of the Graz University of Technology in Austria published an article in the journal Nature Machine Intelligence in which they claim to have experimentally verified that a Nahuku board made up of 32 Loihi chips is up to sixteen times more efficient than a hardware infrastructure with comparable power built from graphics processors similar to those we can find inside our computers.
GPUs give us higher performance than general-purpose processors when both face the execution of an artificial intelligence algorithm because their architecture prioritizes parallelism. The problem is that the energy consumption of a graphics-processor farm can be very high, and in this context the possibility of meeting this challenge while consuming up to sixteen times less energy is very attractive. This is what, according to Intel, the neuromorphic systems it is working on already offer us, and it seems to us a compelling reason to keep a very close eye on them.
More information: Nature Machine Intelligence