Signal Processing and Neuromorphic Computing
Work on neuromorphic computing has proceeded, often in parallel, among researchers in machine learning, computational neuroscience, and hardware design. While the problems under study -- regression, classification, control, and learning -- are central to signal processing, the signal processing community has by and large not been involved in the definition of this emerging field. Nevertheless, with the increasing availability of neuromorphic chips and platforms, progress in neuromorphic computing calls for an interdisciplinary effort by researchers in signal processing, in concert with researchers in machine learning, hardware design, system design, and computational neuroscience.
From a signal processing perspective, the specific features and constraints of neuromorphic computing platforms open interesting new problems concerning regression, classification, control, and learning. In particular, Spiking Neural Networks (SNNs) consist of asynchronous distributed architectures that process sparse binary time series by means of local spike-driven computations, local or global feedback, and online learning. Ideally, they are characterized by a graceful degradation in performance as the number of spikes, and hence the energy usage, of the network decreases. As an example, recent work has shown that SNNs can obtain satisfactory solutions to the sparse regression (LASSO) problem much more quickly than conventional iterative algorithms. Such solutions leverage tools well known to signal processing researchers, such as variational inference, nonlinear systems theory, and stochastic gradient descent.
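To make the notion of "local spike-driven computations" concrete, the following is a minimal illustrative sketch (not taken from any of the papers discussed here) of a leaky integrate-and-fire (LIF) neuron, the basic building block of most SNN models: it integrates an input current with leakage and emits a sparse binary spike train whenever its membrane potential crosses a threshold. All parameter values are arbitrary choices for illustration.

```python
import numpy as np

def lif_neuron(inputs, tau=20.0, threshold=1.0, dt=1.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron.

    inputs: array of input currents, one per time step.
    Returns a binary spike train: 1 where the membrane potential
    crosses the threshold (followed by a reset), 0 elsewhere.
    """
    v = 0.0
    spikes = np.zeros(len(inputs), dtype=int)
    for t, i_t in enumerate(inputs):
        # Leaky integration of the input current.
        v += dt * (-v / tau + i_t)
        if v >= threshold:  # Spike and reset.
            spikes[t] = 1
            v = 0.0
    return spikes

rng = np.random.default_rng(0)
spikes = lif_neuron(rng.uniform(0.0, 0.2, size=200))
```

The output is exactly the kind of sparse binary time series the paragraph above refers to: most entries are zero, and the spike rate (and hence energy usage on a neuromorphic chip) grows with the input drive.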
IEEE Signal Processing Magazine Special Issue
The scope of the field, encompassing neuroscience, hardware design, and machine learning, makes it difficult for a non-expert to find a suitable entry point into the literature. The goal of the recently published special issue of the IEEE Signal Processing Magazine is to bring together key researchers in this area and to provide up-to-date, survey-style papers on algorithmic, hardware, and neuroscience perspectives on the state of the art of this emerging field. The special issue, co-edited by Osvaldo Simeone, Bipin Rajendran, Andre Gruning, Evangelos Eleftheriou, Mike Davies, Sophie Deneve, and Guang-Bin Huang, is organized as follows.
The special issue opens with a contribution by Yulia Sandamirskaya and Giacomo Indiveri, “The importance of space and time for signal processing in neuromorphic agents”, which introduces the role of time-encoded information and of parallel neuromorphic computing architectures in enabling more efficient learning agents than state-of-the-art artificial neural networks (ANNs).
Sensing and Time-Encoded Representations
Neuromorphic computing architectures take as input time-encoded, i.e., spiking, signals. These can be produced either by neuromorphic sensors or through the conversion of natural signals such as images, video, or audio. The next two papers of this special issue address these two scenarios. In “Event-Driven Sensing for Efficient Perception” by Shih-Chii Liu, Bodo Rueckauer, Enea Ceolini, Adrian Huber, and Tobi Delbruck, the authors discuss the main properties of the data produced by neuromorphic sensors and show how these properties enable energy-efficient, low-latency, real-time computing on neuromorphic platforms. The following paper, “Signal Processing Foundations for Time-based Signal Representations” by Noyan C. Sevuktekin, Lav R. Varshney, Pavan K. Hanumolu, and Andrew C. Singer, discusses signal processing foundations for time-based representations of exogenous signals and for the reconstruction of these signals from their time-encoded versions.
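As a hedged illustration of how a conventionally sampled signal can be converted into spike events, the sketch below implements a simple send-on-delta (level-crossing) encoder, in the spirit of the event-driven sensing schemes surveyed above; the function name and the threshold value are illustrative choices, not an API from any of the papers. An ON (+1) or OFF (-1) event is emitted each time the signal moves a fixed step `delta` away from the last encoded level.

```python
import numpy as np

def send_on_delta(signal, delta=0.1):
    """Encode a sampled signal as a sequence of ON/OFF spike events.

    Emits (time index, polarity) pairs: +1 when the signal rises by
    `delta` above the last encoded level, -1 when it falls by `delta`
    below it, mimicking a level-crossing (send-on-delta) sensor.
    """
    level = signal[0]
    events = []  # list of (index, polarity) pairs
    for t, x in enumerate(signal[1:], start=1):
        while x - level >= delta:
            level += delta
            events.append((t, +1))
        while level - x >= delta:
            level -= delta
            events.append((t, -1))
    return events

t = np.linspace(0.0, 1.0, 100)
events = send_on_delta(np.sin(2 * np.pi * t), delta=0.2)
```

Note that the number of events scales with the total variation of the signal rather than with the sampling rate, which is the source of the sparsity and low latency discussed in the papers above.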
Learning and Signal Processing Applications
Neuromorphic platforms can be trained to carry out a variety of inference and control tasks. The next set of papers reviews training algorithms and applications. In “Surrogate Gradient Learning in Spiking Neural Networks”, Emre O. Neftci, Hesham Mostafa, and Friedemann Zenke provide a review of training algorithms for standard deterministic models of SNNs via surrogate gradient methods, which aim at overcoming the non-differentiability of the relevant loss functions. As an alternative solution, the next paper, “An Introduction to Probabilistic Spiking Neural Networks” by Hyeryung Jang, Osvaldo Simeone, Brian Gardner, and Andre Gruning, discusses the use of probabilistic models and reviews the resulting learning rules and applications. In order to further reduce the complexity of training, reservoir computing techniques have been proposed, which adapt only a subset of the weights while fixing the remaining weights at random values. Nicholas Soures and Dhireesha Kudithipudi next present an overview of the resulting “Spiking Reservoir Networks”. Finally, Cengiz Pehlevan and Dmitri B. Chklovskii in “Neuroscience-inspired unsupervised learning algorithms for processing streaming data” focus on the class of unsupervised learning algorithms, providing a principled derivation of similarity-based local learning rules applied to problems such as linear dimensionality reduction, sparse and/or nonnegative feature extraction, and blind nonnegative source separation.
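The core idea behind surrogate gradient methods can be sketched in a few lines. The forward pass of a spiking neuron applies a hard threshold, whose derivative is zero almost everywhere; in the backward pass, that derivative is replaced by a smooth surrogate. The sketch below uses the derivative of a steep sigmoid as the surrogate, one common choice; the function names and the steepness parameter `beta` are illustrative assumptions, not the specific formulation of the paper.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard threshold (non-differentiable)."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: replace the zero-almost-everywhere derivative
    of the hard threshold with the derivative of a steep sigmoid."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.linspace(-1.0, 3.0, 9)
g = surrogate_grad(v)  # largest for membrane potentials near threshold
```

In a training loop, `spike` is used when running the network forward, while `surrogate_grad` stands in for the true derivative during backpropagation, so that gradient information flows through neurons whose potential is close to the firing threshold.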
Standard computing systems based on the von Neumann architecture are not well suited to harnessing the efficiency of computing in SNNs. In “Low-Power Neuromorphic Hardware for Signal Processing Applications”, Bipin Rajendran, Abu Sebastian, Michael Schmuker, Narayan Srinivasa, and Evangelos Eleftheriou review architectural and system-level design aspects underlying the operation of neuromorphic computing platforms for the efficient implementation of SNNs.