Microchips that mimic the human brain could make AI far more energy efficient


Artificial intelligence (AI) makes video games more realistic and helps your phone recognize your voice—but the power-hungry programs slurp up energy big time. However, the next generation of AI may be 1000 times more energy efficient, thanks to computer chips that work like the human brain. A new study shows such neuromorphic chips can run AI algorithms using just a fraction of the energy consumed by ordinary chips.

“This is an impressive piece of work,” says Steve Furber, a computer scientist at the University of Manchester. Such advances, he says, could lead to huge leaps in performance in complex software that, say, translates languages or pilots driverless cars.

An AI program generally excels at finding certain desired patterns in a data set, and one of the most complicated things it does is keep bits of the pattern straight as it pieces together the whole thing. Consider how a computer might recognize an image. First, it spots the well-defined edges of that image. Then, it must remember these edges—and all subsequent parts of the image—as it forms the final picture.

A common component of such networks is a software unit called long short-term memory (LSTM), which maintains a memory of one element as things change over time. A vertical edge in an image, for example, needs to be retained in memory as the software determines whether it represents a part of the numeral “4” or the door of a car. Typical AI systems must keep track of hundreds of LSTM elements at once.
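
For readers who want the mechanics, the core LSTM update is only a few lines. Below is a minimal numpy sketch of the standard gating equations (biases omitted for brevity, and the weights are random stand-ins rather than a trained model), showing how the cell state c carries a memory from one time step to the next:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step: gates decide what enters, stays in, and
    leaves the cell state c, the unit's running memory of the input."""
    z = np.concatenate([x, h_prev])
    f = sigmoid(W["f"] @ z)                    # forget gate: keep old memory?
    i = sigmoid(W["i"] @ z)                    # input gate: store new input?
    o = sigmoid(W["o"] @ z)                    # output gate: reveal memory?
    c = f * c_prev + i * np.tanh(W["c"] @ z)   # updated cell state (the memory)
    h = o * np.tanh(c)                         # output for this time step
    return h, c

# Toy run: 4 inputs per step, 8 memory units, random (untrained) weights.
n_in, n_hid = 4, 8
W = {k: rng.normal(size=(n_hid, n_in + n_hid)) for k in "fioc"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                             # feed a 5-step input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W)
```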

Current networks of LSTMs operating on conventional computer chips are highly accurate. But the chips are power hungry. To process information, they must first retrieve individual bits of stored data, manipulate them, and then send them back to storage. And then repeat that sequence over and over and over.

Intel, IBM, and other chipmakers have been experimenting with an alternative chip design, called neuromorphic chips. These process information like a network of neurons in the brain, in which each neuron receives inputs from others in the network and fires if the total input exceeds a threshold. The new chips are designed to have the hardware equivalent of neurons linked together in a network. AI programs also rely on networks of faux neurons, but in conventional computers, these neurons are defined entirely in software and therefore reside, virtually, in the computer’s separate memory chips.
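
The neuron model these chips build on can also be stated in a few lines. Here is a minimal sketch of the textbook leaky integrate-and-fire unit, one simple way to see the firing rule described above (Loihi's actual hardware model is more elaborate, and the numbers here are illustrative):

```python
def lif_step(v, input_current, threshold=1.0, leak=0.9):
    """One time step of a leaky integrate-and-fire neuron: input charge
    accumulates on the membrane potential v, which slowly leaks away,
    and the neuron fires only when v crosses the threshold."""
    v = leak * v + input_current   # integrate the input, with decay
    fired = v >= threshold
    if fired:
        v = 0.0                    # reset to baseline after a spike
    return v, fired

# Weak inputs leak away; a strong enough run of inputs triggers a spike.
v = 0.0
for current in [0.3, 0.3, 0.3, 0.6, 0.1]:
    v, fired = lif_step(v, current)
```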

The setup in a neuromorphic chip handles memory and computation together, making it much more energy efficient: Our brains only require 20 watts of power, about the same as an energy-efficient light bulb. But to make use of this architecture, computer scientists need to reinvent how they carry out functions such as LSTM.

That was the task that Wolfgang Maass, a computer scientist at the Graz University of Technology, took on. He and his colleagues sought to replicate a memory storage mechanism that biological neural networks in our brains rely on, called after-hyperpolarizing (AHP) currents. After a neuron in the brain fires, it typically returns to its baseline level and remains quiescent until it once again receives enough input to exceed its threshold. But in networks with AHP currents, after firing once, a neuron is temporarily inhibited from firing again, a dead period that actually helps the network retain information while expending less energy.
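
One common way to capture an AHP-like effect in simulation is to give each spiking neuron an adaptation variable that jumps after every spike, temporarily raising the firing threshold, and then decays slowly back. The sketch below, a simplified illustration of that idea rather than the paper's exact model and with arbitrary parameter values, extends the leaky integrate-and-fire unit above:

```python
def ahp_step(v, a, input_current, v_th=1.0, leak=0.9,
             a_decay=0.99, a_jump=0.5):
    """Spiking neuron with an AHP-like adaptation variable a: each spike
    raises the effective threshold, so the neuron is briefly harder to
    fire again. Because a decays slowly, it carries a trace of recent
    activity, a cheap form of short-term memory."""
    v = leak * v + input_current
    fired = v >= v_th + a          # adaptation raises the bar to fire
    if fired:
        v = 0.0                    # reset the membrane potential
        a += a_jump                # temporary self-inhibition (the AHP)
    a *= a_decay                   # the inhibition fades over time
    return v, a, fired
```

The memory, in other words, lives in a variable that costs nothing to maintain while the neuron stays silent, rather than in activity that must be continually refreshed.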

Maass and his colleagues integrated an AHP neuron firing pattern into their neuromorphic neural network software and ran their network through two standard AI tests. The first challenge was to recognize a handwritten “3” in an image broken into hundreds of individual pixels. Here, they found that when run on one of Intel’s neuromorphic Loihi chips, their algorithm was up to 1000 times more energy efficient than LSTM-based image recognition algorithms run on conventional chips.
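
In benchmarks like this one, the image is typically presented to the network as a sequence, one pixel or pixel row at a time, so the spiking network must hold the early pixels in memory until the whole digit has been seen. Here is a hypothetical sketch of that sequential presentation, reusing the ahp_step neuron above; the random weights and the omitted readout layer mean this illustrates the data flow, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64
w_in = rng.normal(size=n_neurons)     # random, untrained input weights

image = rng.random(28 * 28)           # stand-in for a 28x28 handwritten digit
v = np.zeros(n_neurons)               # membrane potentials
a = np.zeros(n_neurons)               # AHP adaptation variables
spike_counts = np.zeros(n_neurons)

# Present the image pixel by pixel; neurons that fire leave a slowly
# decaying trace in `a`, which serves as the network's working memory.
for pixel in image:
    for j in range(n_neurons):
        v[j], a[j], fired = ahp_step(v[j], a[j], w_in[j] * pixel)
        spike_counts[j] += fired

# A trained readout would map spike_counts (or the final adaptation
# state) to one of the ten digit classes; that step is omitted here.
```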

For their second test, in which the computer needed to answer questions about the meaning of stories up to 20 sentences long, the neuromorphic setup was as much as 16 times as efficient as algorithms run on conventional computer processors, the authors report this week in Nature Machine Intelligence.

Maass notes that this second test was done on a series of 22 of Intel’s first-generation Loihi chips, which consume relatively large amounts of energy in communicating with each other. The company has since come out with second-generation Loihi chips, each with more neurons, which he says should reduce the need for chip-to-chip communication and thus make the software run more efficiently.

For now, few neuromorphic chips are commercially available. So, wide-scale applications likely won’t emerge quickly. But advanced AI algorithms, such as the ones Maass has demonstrated, could help these chips gain a commercial foothold, says Anton Arkhipov, a computational neuroscientist at the Allen Institute. “At the very least, that would help speed up AI systems.”

That, in turn, could lead to novel applications, such as AI digital assistants that could not only prompt someone with the name of a person in a photo, but also remind them where they met and relate stories of their past together. By incorporating other neuronal firing patterns found in the brain, Maass says, future neuromorphic setups may even one day begin to explore how the brain’s many firing patterns work together to produce consciousness.

