Memristive devices that mimic neuron-connecting synapses could serve as the hardware for neural networks that copy the way the brain learns. Now two new studies may help solve key problems these components face not just with yields and reliability, but with finding applications beyond neural nets.
Memristors, or memory resistors, are essentially switches that can remember which electric state they were toggled to after their power is turned off. Scientists worldwide aim to use memristors and similar components to build electronics that, like neurons, can both compute and store data. These memristive devices may greatly reduce the energy and time lost in conventional microchips shuttling data back and forth between processors and memory. Such brain-inspired neuromorphic hardware may also prove ideal for implementing neural networks—AI systems increasingly finding use in applications such as analyzing medical scans and empowering autonomous vehicles.
However, current memristive devices typically rely on emerging technologies with low production yields and unreliable electronic performance. To help overcome these challenges, researchers in Israel and China fabricated memristive devices using a standard CMOS production line. The resulting silicon synapses boasted a 100 percent yield and 170- to 350-fold greater energy efficiency than a high-performance Nvidia Tesla V100 graphics processing unit on multiply-accumulate operations, the most basic operation in neural networks.
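A multiply-accumulate operation simply sums the element-wise products of inputs and weights; it is the step a neural-network layer repeats for every output neuron. A minimal sketch (the numbers are purely illustrative, not from the study):

```python
def mac(inputs, weights):
    """Multiply-accumulate: the sum of element-wise products,
    the operation a neural-network layer performs per output neuron."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiply and one accumulate per pair
    return acc

print(mac([1, 2, 3], [4, 5, 6]))  # 4 + 10 + 18 = 32
```

Because inference consists almost entirely of these operations, any per-MAC energy saving multiplies across the whole network.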
“Memristors are a highly promising lead to provide low-energy consumption artificial intelligence.”
—Damien Querlioz, Université Paris-Saclay
The scientists developed the new devices using the floating-gate transistor technology found in commercial flash memory. Whereas conventional floating-gate transistors have three terminals, the new components have only two, which greatly simplified fabrication and operation and reduced their size. In addition, the memristors have only binary inputs and outputs, eliminating the need for the large, energy-hungry analog-to-digital and digital-to-analog converters often used in neuromorphic hardware, says study senior author Shahar Kvatinsky, an associate professor of electrical and computer engineering at the Technion−Israel Institute of Technology in Haifa.
The new devices displayed high endurance, operating through more than 100,000 cycles of programming and erasing with voltage pulses. They also showed only moderate device-to-device variation and are projected to retain data for more than 10 years.
The researchers used an array of about 150 of these components to implement a kind of neural network that operated using only binary signals. In experiments, it could recognize handwritten digits with roughly 97 percent accuracy. This work, Kvatinsky says, “is just a start—a proof-of-concept and not a whole integrated chip or a large neural network. Integration and scaling up is a major challenge.”
In another study, a team of French researchers investigated using memristors for the statistical computing technique known as Bayesian reasoning, in which prior knowledge helps compute the chances that an uncertain choice might be correct. Its results are fully explainable—unlike many nearly inscrutable AI computations—and it can perform well when there is little available data, as it can incorporate prior expert knowledge. However, “it is just not obvious how to compute Bayesian reasoning with memristors,” says study co-author Damien Querlioz, research scientist at CNRS, Université Paris-Saclay.
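Bayesian reasoning rests on Bayes’ rule: the probability of a hypothesis given new evidence is the likelihood of that evidence times the prior probability, divided by the overall probability of the evidence. A toy sketch of how a prior tempers a noisy observation (the sensor numbers below are hypothetical, not from the study):

```python
def posterior(prior, likelihood, false_alarm_rate):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where the
    evidence term P(E) sums over both ways an alarm can occur."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a fault occurs 1% of the time; the sensor
# fires on 95% of real faults and on 5% of normal readings.
p = posterior(prior=0.01, likelihood=0.95, false_alarm_rate=0.05)
print(round(p, 3))  # about 0.161: one alarm alone is weak evidence
```

Every number in the chain is an explicit probability, which is why such a system can explain its conclusions in a way many neural networks cannot.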
Implementing Bayesian reasoning using conventional electronics requires complex memory patterns, “which increase exponentially with the number of observations,” says neuromorphic scientist Melika Payvand at the Institute of Neuroinformatics in Zurich, who did not participate in either study. However, Querlioz and his colleagues “found a way of simplifying this,” she notes.
Memristor AI “excels in safety-critical situations, where high uncertainty is present, little data is available, and explainable decisions are required.”
—Damien Querlioz, Université Paris-Saclay
The scientists rewrote Bayesian equations so a memristor array could perform statistical analyses that harness randomness—a.k.a. “stochastic computing.” Using this approach, the array generated streams of semi-random bits at each tick of the clock. These bits were often zeroes but sometimes ones, and the proportion of ones to zeroes encoded the probabilities needed for the statistical calculations the array performed. This digital strategy uses relatively simple circuitry compared with non-stochastic methods, which reduces the system’s size and energy demands.
The researchers fabricated a prototype circuit incorporating 2,048 hafnium oxide memristors on top of 30,080 CMOS transistors on the same chip. In experiments, they had the new circuit recognize a person’s handwritten signature from signals beamed from a device worn on the wrist.
Bayesian reasoning is often thought of as computationally expensive with conventional electronics. The new circuit performed handwriting recognition using 1/800th to 1/5,000th the energy of a conventional computer processor, suggesting “that memristors are a highly promising lead to provide low-energy consumption artificial intelligence,” Querlioz says.
The new device can also switch on and off instantly, suggesting it could run only when needed to conserve power. It is also resilient to errors from random events, making it useful in extreme environments, the researchers say. All in all, the new circuit “excels in safety-critical situations, where high uncertainty is present, little data is available, and explainable decisions are required,” Querlioz says. “Examples are medical sensors, or circuits for monitoring the safety of industrial facilities.”
A future direction for the Bayesian circuits might include machines that collect multiple types of sensory data, such as autonomous vehicles or drones, Payvand says. If the machine is not confident about predictions made based on one sense, it could boost its confidence by analyzing data from a different sense, she notes.
A key obstacle Bayesian systems face “is their scalability to larger problems or networks,” Payvand cautions. Querlioz notes that his team has designed a considerably scaled-up version of the device “that is currently being fabricated.” Their circuit is currently specialized for certain types of Bayesian computation, he adds, and they want to create more adaptable designs in the future.
To a certain degree, both studies use randomness—Querlioz and his colleagues use it for statistical analysis, while Kvatinsky and his collaborators have their neural networks sample data at random intervals to avoid the kinds of errors that can occur when sampling data a limited number of times.
“These approaches pair very well with the inherited randomness of memristor devices,” says Giacomo Pedretti, a senior AI research scientist at Hewlett Packard Labs, who did not take part in either work. It would be “very interesting” to try to use the inherent noise in these electronics to generate controlled randomness “rather than implementing costly digital pseudorandom number generators,” he says.
Both studies appeared 19 December in the journal Nature Electronics.
Source: IEEE Spectrum Computing