Neuromorphic Computing using Superconducting Electronics
The human brain is a powerful computing system, exhibiting many desirable computing properties such as fault tolerance, parallelism, and energy efficiency. Plenary speaker Kenneth Segall discusses the most recent and exciting developments in superconducting neuromorphic computing.
The recent rise of artificial neural networks and deep learning, which in turn has produced systems like AlphaZero and ChatGPT, has been fueled by imitating the information-processing mechanisms of the brain. These advances have come even though such neural network and AI (Artificial Intelligence) programs typically run on conventional digital hardware, whose architecture and operating principles are fundamentally different from the brain's; the mismatch results in excess power dissipation and reduced speed for these systems.
Over the past decade, significant efforts have been made to produce hardware that operates more like biological neural systems, igniting the field of neuromorphic computing. While the field is still relatively new, neuromorphic hardware has already demonstrated higher speed and lower power consumption when running neural networks and AI programs.
Superconducting electronics are a natural fit for neuromorphic computing. Many basic operations of neuromorphic computing, such as spiking and thresholding, are fundamental to the physics of Josephson junctions. Low-loss superconducting transmission lines can carry pulses without distortion, like dendrites and axons, and mutually coupled superconducting loops can perform storing and weighting operations, like synapses. Moreover, neuromorphic processors do not rely heavily on dense memory circuits, which are typically a weakness of superconducting digital computing. Recent studies have shown that a superconducting neuromorphic processor could be faster, more energy efficient, and more biologically realistic than any semiconducting neuromorphic hardware available today.
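The spiking and thresholding behavior mentioned above follows from the standard resistively and capacitively shunted junction (RCSJ) model of a Josephson junction: below its critical current the junction is superconducting and quiescent, while above threshold its phase "whirls" and each 2π phase slip emits a voltage pulse, much like a neuron firing. As an illustrative sketch only (not taken from the presentation), the dimensionless RCSJ equation can be integrated numerically; the parameter names and values here are assumptions chosen for demonstration:

```python
import math

def simulate_jj(i_bias, beta_c=0.1, dt=0.01, steps=20000):
    """Integrate the dimensionless RCSJ model of a Josephson junction:

        beta_c * d2phi/dt2 + dphi/dt + sin(phi) = i_bias

    Time is in units of the junction's characteristic time, and currents
    are in units of the critical current Ic. Parameter values are
    illustrative assumptions, not from any specific device.
    """
    phi, v = 0.0, 0.0  # superconducting phase and its rate (proportional to voltage)
    for _ in range(steps):
        # Simple forward-Euler step; fine for this qualitative demonstration.
        dv = (i_bias - math.sin(phi) - v) / beta_c
        v += dv * dt
        phi += v * dt
    return phi, v

# Below the critical current (i < 1): phase settles, junction stays
# superconducting with essentially zero voltage -- the "resting" neuron.
phi_sub, v_sub = simulate_jj(0.5)

# Above the critical current (i > 1): the phase advances continuously,
# and each 2*pi slip corresponds to a voltage (SFQ) pulse -- a "spike".
phi_super, v_super = simulate_jj(1.5)
n_spikes = int(phi_super // (2 * math.pi))
```

Counting the 2π phase slips gives the firing rate, which grows with bias current above threshold; this threshold-and-fire behavior is what makes the junction a natural artificial neuron.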
This presentation shows basic neuron and synaptic circuits, examples of spiking neural network architectures, recent experimental results, and future performance projections. Applications include image and video processing, biological brain simulation, and fast pattern recognition. The presentation concludes with a possible pathway to human-cortex complexity.
Dr. Ken Segall has been working in superconducting electronics since 1995. His areas of research include superconducting detectors, nonlinear dynamics in superconducting circuits, superconducting quantum computing, quantum tunneling in Josephson arrays, synchronization in superconducting networks, numerical simulation of superconducting circuits, and superconducting artificial neurons and synapses. Dr. Segall received his Ph.D. from Yale University in 1999 in the Department of Applied Physics, where he won the Harding Bliss Award for Excellence in Applied Physics and Engineering. After some postdoctoral work at Yale, he was a postdoctoral associate at M.I.T. in the Electrical Engineering Department for three years. In 2003, he started as an Assistant Professor at Colgate University in the Department of Physics and Astronomy, where he has been ever since. He was promoted to Associate Professor in 2009 and to Full Professor in 2017. He served as department chair in 2013 and from 2014 to 2017. His teaching interests include Thermodynamics, Nonlinear Dynamics and Chaos, Solid State Physics, Electricity and Magnetism, Mechanics, Quantum Mechanics, Math Methods of Physics, and Sports Statistics and Analytics. He lives in Central New York with his wife and two children.