Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

S. Burc Eryilmaz*, Student Member, IEEE, Emre Neftci, Siddharth Joshi, Student Member, IEEE, SangBum Kim, Member, IEEE, Matthew BrightSky, H.-L. Lung, Chung Lam, Gert Cauwenberghs, Fellow, IEEE, H.-S. Philip Wong, Fellow, IEEE

IEEE Transactions on Electron Devices, 63(12)

This work is supported in part by SONIC, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, the NSF Expedition on Computing (Visual Cortex on Silicon, award 1317470), and the member companies of the Stanford Non-Volatile Memory Technology Research Initiative (NMTRI) and the Stanford SystemX Alliance.

S. B. Eryilmaz and H.-S. P. Wong are with the Electrical Engineering Department, Stanford University, Stanford, CA 94305 USA.

E. Neftci is with the Department of Cognitive Sciences, UC Irvine, Irvine, CA 92697 USA (e-mail: firstname.lastname@example.org).

S. Joshi is with the Department of Electrical and Computer Engineering, UC San Diego, San Diego, CA 92093 USA (e-mail: email@example.com).

S. Kim, M. BrightSky, and C. Lam are with IBM Research, Yorktown Heights, NY 10598 USA.

H.-L. Lung is with Macronix International Co., Ltd., Emerging Central Lab, Taiwan (e-mail: Sllung@mxic.com.tw).

G. Cauwenberghs is with the Department of Bioengineering, UC San Diego, San Diego, CA 92093 USA (e-mail: firstname.lastname@example.org).
Index Terms—neuromorphic computing, phase change memory
Deep learning can extract complex and useful structures from data without significant amounts of manual feature engineering. It has made significant advances in recent years and has been shown to outperform many other machine learning techniques for a variety of tasks such as image recognition, speech recognition, natural language understanding, predicting the effects of mutations in DNA, and reconstructing brain circuits.
However, training of large-scale deep networks (~10^9 synapses, compared to ~10^15 synapses in the human brain) in today's hardware consumes more than 10 gigajoules (estimated) of energy [3-4]. An important origin of this energy consumption is the physical separation of processing and memory, which is exacerbated by the large amounts of data needed for training deep networks [1-5]. It has been reported that ~40 percent of the energy consumed in general-purpose computers is due to the off-chip memory hierarchy, and this fraction will increase as applications become more data-centric. GPUs do not solve this problem, since up to 50 percent of dynamic power and 30 percent of overall power are consumed by off-chip memory, as shown in several benchmarks. On-chip SRAM does not solve the problem either, since it is very area inefficient (>100 F^2 per bit, F being the minimum feature size) and cannot scale up with system size.
Extracting useful information from data, which requires efficient data mining and (deep) learning algorithms, is becoming increasingly common in consumer products such as smartphones, and is expected to be even more important for the internet-of-things (IoT), where energy efficiency is especially crucial. To scale up these systems in an energy-efficient manner, it is necessary to develop new learning algorithms and hardware architectures that can capitalize on fine-grained on-chip integration of memory with computation. Because the number of synapses in a neural network far exceeds the number of neurons, we must pay special attention to the power, device density, and wiring of the electronic synapses for scaled-up systems that solve practical problems.
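To make the preceding scaling argument concrete, here is a minimal Python sketch (the layer widths are hypothetical, chosen only for illustration and not taken from this work) that counts neurons and synapses in a small fully connected network; synapse count grows with products of adjacent layer widths and quickly dwarfs neuron count.

    # Illustrative only: hypothetical layer widths, not from this work.
    layers = [784, 500, 500, 10]  # neurons per layer

    neurons = sum(layers)
    synapses = sum(a * b for a, b in zip(layers, layers[1:]))

    print(f"neurons:  {neurons}")                  # 1794
    print(f"synapses: {synapses}")                 # 647000
    print(f"ratio:    {synapses / neurons:.0f}x")  # ~361x

Even at this toy scale, synapses outnumber neurons by more than two orders of magnitude, which is why synaptic storage and wiring dominate the hardware budget of a scaled-up system.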
Today, synaptic weights in both conventional processors and neuromorphic processors [10-12] are implemented in SRAM and/or DRAM. Due to processing limitations, DRAM needs to be on a separate chip or connected by chip stacking using through-silicon vias (TSVs) [13-15] that have limited via density. This results in increased power consumption and limited bandwidth for memory accesses. SRAM, on the other hand, occupies too much area (>100 F^2 per bit, 58,800 nm^2 at the 14 nm technology node), limiting the amount of local memory that can be accessed efficiently.
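As a quick sanity check on the area figures quoted above, the short sketch below (assuming F = 14 nm, consistent with the 58,800 nm^2 per-bit figure) converts the SRAM cell area into units of F^2:

    # Sanity check: convert SRAM cell area to units of F^2.
    # Assumes F = 14 nm, consistent with the 58,800 nm^2 figure above.
    F_nm = 14.0               # minimum feature size, nm
    cell_area_nm2 = 58_800.0  # SRAM area per bit, nm^2

    print(f"{cell_area_nm2 / F_nm**2:.0f} F^2 per bit")  # 300 F^2 per bit

At 300 F^2 per bit, the cell comfortably exceeds the >100 F^2 bound cited in the text.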
“New” non-volatile resistive memory elements such as