Thursday, 18 August 2011 21:20

New Chip Borrows Brain's Computing Tricks



IBM has unveiled an experimental chip that borrows tricks from brains to power a cognitive computer, a machine able to learn from and adapt to its environment.

Reactions to the computer giant’s press release about SyNAPSE, short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, have ranged from conservative to zany. Some even claim it’s IBM’s attempt to recreate a cat brain in silicon.

“Each neuron in the brain is a processor and memory, and part of a social network, but that’s where the brain analogy ends. We’re not trying to simulate a brain,” said IBM spokeswoman Kelly Sims. “We’re looking to the brain to develop a system that can learn and make sense of environments on the fly.”

The human brain is a vast network of roughly 100 billion neurons sharing 100 trillion connections, called synapses. That complexity makes for more mysteries than answers — how consciousness arises, how memories are stored and why we sleep are all outstanding questions. But researchers have learned a lot about how neurons and their connections underpin the power, efficiency and adaptability of the brain.

To get a better understanding of SyNAPSE and how it borrows from organic neural networks, Wired.com spoke with project leader Dharmendra Modha of IBM Research.

Wired.com: Why do we want computers to learn and work like brains?

Dharmendra Modha: We see an increasing need for computers to be adaptable, to develop functionality today’s computers can’t. Today’s computers can carry out fast calculations. They’re left-brain computers, and are ill-suited for right-brain computation, like recognizing danger, the faces of friends and so on, that our brains do so effortlessly.

The analogy I like to use: You wouldn’t drive a car without half a brain, yet we have been using only one type of computer. It’s like we’re adding another member to the family.

Wired.com: So, you don’t view SyNAPSE as a replacement for modern computers?

Modha: I see each system as complementary. Modern computers are good at some things — they have been with us since ENIAC, and I think they will be with us for perpetuity — but they aren’t well-suited for learning.

A modern computer, in its elementary form, is a block of memory and a processor separated by a bus, a communication pathway. If you want to create brain-like computation, you need to emulate the states of neurons, synapses, and the interconnections amongst neurons in the memory, the axons. You have to fetch neural states from the memory, send them to the processor across the bus, update them, send them back and store them in the memory. It’s a cycle of store, fetch, update, store … and on and on.
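To make that cycle concrete, here is a minimal sketch in Python of simulating neurons on a conventional machine. The sizes, names and the tanh activation are illustrative assumptions, not IBM’s model; the point is only that every tick drags all neural state across the memory bus and back.

    import numpy as np

    NUM_NEURONS = 1000  # toy size; real networks are vastly larger

    # The "memory" side of the bus: neuron states and synaptic
    # weights live in one big block, far from the processor.
    neuron_states = np.random.rand(NUM_NEURONS)
    synapse_weights = np.random.rand(NUM_NEURONS, NUM_NEURONS) * 0.01

    def simulation_step(states, weights):
        # Fetch: pull every state and weight across the bus to the processor.
        inputs = weights @ states       # each neuron sums its weighted inputs
        new_states = np.tanh(inputs)    # update: toy activation, not IBM's rule
        return new_states               # store: caller writes it back to memory

    # Real-time behavior forces this fetch-update-store loop to run
    # very fast, which is what drives clock rates ever higher.
    for tick in range(100):
        neuron_states = simulation_step(neuron_states, synapse_weights)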

To deliver real-time and useful performance, you have to run this cycle very, very fast. And that leads to ever-increasing clock rates. ENIAC’s was about 100 kHz. In 1978 they were 4.7 MHz. Today’s processors are about 5 GHz. If you want faster and faster clock rates, you achieve that by building smaller and smaller devices.

Wired.com: And that’s where we run into trouble, right?

Modha: Exactly. There are two fundamental problems with this trajectory. The first is that, very soon, we will hit hard physical limits. Mother Nature will stop us. Power is the next problem. As you shorten the distance between small elements, you leak current at exponentially higher rates. At some point the system isn’t useful.

So we’re saying, let’s go back a few million years instead of to ENIAC. Neurons fire at about 10 Hz, on average. The brain doesn’t have ever-increasing clock rates. It’s a social network of neurons.

Wired.com: What do you mean by a social network?

Modha: The links between the neurons are synapses, and that’s the important thing — how is your network wired? Who are your friends, and how close are they? You can think of the brain as a massively, massively parallel distributed computation system.

Suppose that you would like to map this computation onto one of today’s computers. They’re ill-suited for this and inefficient, so we’re looking to the brain for a different approach. Let’s build something that looks like that, on a basic level, and see how well that performs. Build a massively, massively, massively parallel distributed substrate. And that means, like in the brain, bringing your memory extremely close to a processor.

It’s like an orange farm in Florida. The trees are the memory, and the oranges are bits. Each of us, we’re the neurons who consume and process them. Now, you could be collecting them and transporting them over long distances, but imagine having your own small, private orange grove. Now you don’t have to move that data over long distances to get it. And your neighbors are nearby with their orange trees. The whole paradigm is a huge sea of synapse-like memory elements. It’s an invisible layer of processing.
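As a sketch of that “private orange grove” idea, imagine each neuron as an object that owns its own synapse memory, so an update never moves state across a shared bus. This is an illustration of the principle only, not IBM’s architecture:

    import random

    class Neuron:
        """A villager with a private grove: each neuron keeps its own
        synaptic weights (memory) right next to its update rule
        (processing), so nothing travels over a long shared bus."""

        def __init__(self):
            self.state = 0.0
            self.synapses = {}  # local memory: neighbor -> weight

        def connect(self, neighbor, weight):
            self.synapses[neighbor] = weight

        def update(self):
            # Process only locally held data: neighbors' states,
            # weighted by privately stored synapses.
            total = sum(w * n.state for n, w in self.synapses.items())
            self.state = 1.0 if total > 0.5 else 0.0  # toy threshold unit

    # A small "social network" of neurons, each with a few friends.
    neurons = [Neuron() for _ in range(10)]
    for n in neurons:
        for friend in random.sample(neurons, 3):
            n.connect(friend, random.uniform(0.0, 0.3))

    neurons[0].state = 1.0       # inject some activity
    for _ in range(5):
        for n in neurons:        # in hardware these would all run at once
            n.update()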

Wired.com: In the brain, neural connections are plastic. They change with experience. How can something hard-wired do this?

Modha: The memory holds the synapse-like state, and it can be adapted in real time to encode correlations, associations and causality or anti-causality. There’s a saying out there, “neurons that fire together, wire together.” The firing of neurons can strengthen or weaken synapses locally. That’s how learning is effected.
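A minimal version of that “fire together, wire together” rule, assuming a simple reinforce-and-decay scheme rather than IBM’s actual learning rule, might look like this:

    LEARNING_RATE = 0.01   # how strongly coincident firing reinforces a synapse
    DECAY = 0.001          # how quickly an unused synapse weakens

    def hebbian_update(weight, pre_active, post_active):
        # Strengthen the synapse when both neurons fire together;
        # otherwise let it slowly weaken toward zero.
        if pre_active and post_active:
            return weight + LEARNING_RATE * (1.0 - weight)  # wire together
        return weight * (1.0 - DECAY)                       # slow forgetting

    # Correlated firing drives the weight up; silence lets it drift down.
    w = 0.1
    for step in range(1000):
        pre = post = (step % 2 == 0)  # perfectly correlated toy spike trains
        w = hebbian_update(w, pre, post)
    print(f"weight after correlated activity: {w:.2f}")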

Wired.com: So let’s suppose we have a scaled-up learning computer. How do you coax it into doing something useful for you?

Modha: This is a platform technology that is adaptable to ubiquitous, changing environments. Like the brain, it has an almost limitless array of applications. The brain can take information from sight, touch, sound, smell and other senses and integrate them into modalities. By modalities I mean events like speech, walking and so on.

Those modalities, the entire computation, go back to neural connections: their strength, their location, who is and who is not talking to whom. It is possible to reconfigure some parts of this network for different purposes. Some things are universal to all organisms with a brain — the presence of an edge, textures, colors. Even before you learn, before you’re born, you can recognize them. They’re natural.

Knowing your mother’s face, through nurture, comes later. Imagine a hierarchy of programming techniques, a social network of chip neurons that talk and can be adapted and reconfigured to carry out tasks you desire. That’s where we’d like to end up with this.


Images: 1) The SyNAPSE cognitive computer chip. The central brown core “is where the action happens,” Modha said. IBM would not release detailed diagrams because the $21 million technology is still in an experimental phase and funded by DARPA. (IBM Research – Zurich/Flickr) 2) Dharmendra Modha in front of a “brain wall.” (IBM Research – Zurich/Flickr) 3) DARPA
