Neuromorphic computing, commonly understood as a computing approach built on neurons, synapses, and their dynamics rather than Boolean gates, is gaining considerable mindshare because of its direct applicability to current and emerging computing problems such as smart sensing, smart devices, self-hosted and self-contained devices, and artificial intelligence (AI) applications. In a largely software-defined implementation of neuromorphic computing, one can throw enormous computational power at a problem or optimize models and networks to suit the specific nature of the computational task. A hardware-based approach, however, requires identifying neuronal and synaptic models well suited to achieving high functional and energy efficiency, a prime concern in size, weight, and power (SWaP) constrained environments. In this work, we study the characteristics of hardware neuron models (namely, inference errors, generalizability and robustness, practical implementability, and memory capacity) that have been proposed and demonstrated in a wide range of emerging nanomaterial-based physical devices, and we quantify the performance of such neurons on classes of problems central to real-time signal processing, in the context of reservoir computing. We find that the answer to which neuron to use for which application depends on the particulars of the application's requirements and constraints; that is, we need not just a hammer but a full chest of tools for efficient, high-quality neuromorphic computing.
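To make the reservoir computing setting concrete, the sketch below implements a minimal echo state network in Python. It is a generic software reservoir with a ridge-regression readout, not any of the hardware neuron models evaluated in the study; the toy signal, reservoir size, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Minimal echo state network (ESN) sketch: a generic software reservoir,
# not a specific hardware neuron model. All parameters are illustrative.
rng = np.random.default_rng(42)

n_res = 200                                    # reservoir size (assumed)
u = np.sin(0.2 * np.arange(2000))              # toy input signal
y_target = np.roll(u, -1)                      # task: one-step-ahead prediction

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))      # input weights (fixed, random)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # recurrent weights (fixed, random)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Drive the reservoir and collect states (leaky tanh neurons).
x = np.zeros(n_res)
states = np.zeros((len(u), n_res))
alpha = 0.3                                    # leak rate
for t, u_t in enumerate(u):
    x = (1 - alpha) * x + alpha * np.tanh(W_in[:, 0] * u_t + W @ x)
    states[t] = x

# Train only the linear readout, via ridge regression, after a washout period.
washout, lam = 100, 1e-6
X, Y = states[washout:-1], y_target[washout:-1]
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)

y_pred = X @ W_out
print("NRMSE:", np.sqrt(np.mean((y_pred - Y) ** 2)) / np.std(Y))
```

The design point this illustrates is that only the linear readout is trained; the fixed random reservoir supplies the temporal memory, which is exactly the role a physical hardware neuron or device would play in the architectures the study compares.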
Emerging two-terminal nanoscale memory devices, known as memristors, have demonstrated great potential for implementing energy-efficient neuro-inspired computing architectures over the past decade. As a result, a wide range of technologies has been developed, each in turn described by a distinct empirical model. This diversity of technologies calls for versatile tools that let designers translate memristors' attributes into novel neuro-inspired topologies. In this study, we present NeuroPack, a modular, algorithm-level, Python-based simulation platform that supports studies of memristor-based neuro-inspired architectures performing online learning or offline classification. The NeuroPack environment is designed with versatility at its core, allowing the user to choose from a variety of neuron models, learning rules, and memristor models. Its hierarchical structure enables NeuroPack to predict memristor state changes and the corresponding neural network behavior across a wide range of design decisions and user parameter choices. The use of NeuroPack is demonstrated herein via an application example: handwritten digit classification on the MNIST dataset using an existing empirical model for metal-oxide memristors.
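As a rough illustration of the kind of modular composition described above (a pluggable memristor model, neuron model, and learning rule), the following sketch wires a toy empirical memristor into a leaky integrate-and-fire neuron with a coincidence-based Hebbian update. All class names, signatures, and parameter values here are hypothetical assumptions for exposition; this is not NeuroPack's actual API.

```python
import numpy as np

# Hypothetical sketch of a modular memristor + neuron + learning-rule
# pipeline. Names and parameters are illustrative, NOT NeuroPack's API.

class LinearDriftMemristor:
    """Toy empirical memristor: internal state w in [0, 1] sets conductance."""
    def __init__(self, g_min=1e-6, g_max=1e-4, k=50.0):
        self.g_min, self.g_max, self.k = g_min, g_max, k
        self.w = 0.5
    def conductance(self):
        return self.g_min + self.w * (self.g_max - self.g_min)
    def apply_pulse(self, v, dt):
        # Linear ion-drift-style state update, clipped to the valid range.
        self.w = float(np.clip(self.w + self.k * v * dt, 0.0, 1.0))

class LIFNeuron:
    """Leaky integrate-and-fire neuron driven by instantaneous synaptic kicks."""
    def __init__(self, tau=20e-3, v_th=1.0):
        self.tau, self.v_th, self.v = tau, v_th, 0.0
    def step(self, i_syn, dt):
        self.v = self.v * (1.0 - dt / self.tau) + i_syn
        if self.v >= self.v_th:
            self.v = 0.0
            return True
        return False

# Compose the pieces: a Poisson input drives one neuron through one
# memristive synapse; a crude Hebbian rule potentiates on coincident spikes.
syn, neuron, dt = LinearDriftMemristor(), LIFNeuron(), 1e-4
rng = np.random.default_rng(0)
post_count = 0
for _ in range(20_000):
    pre = rng.random() < 0.05                        # input spike this step?
    kick = 0.2 * syn.conductance() / syn.g_max if pre else 0.0
    post = neuron.step(kick, dt)
    post_count += post
    if pre and post:
        syn.apply_pulse(1.0, dt)                     # potentiate on coincidence
print(f"output spikes: {post_count}, final w: {syn.w:.2f}")
```

The point of the modular split is that the memristor model, the neuron model, and the learning rule can each be swapped independently, which mirrors the platform's stated goal of covering many device technologies with one simulation framework.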
Resistive random-access memories, also known as memristors, are devices whose resistance can be modulated by the electrically driven formation and disruption of conductive filaments within an insulator; they are promising candidates for neuromorphic applications owing to their scalability, low-power operation, and diverse functional behaviors. However, understanding the dynamics of individual filaments and of the surrounding material is challenging, owing to the typically very large cross-sectional areas of test devices relative to the nanometer scale of individual filaments. In the present work, conductive atomic force microscopy is used to study the evolution of conductivity at the nanoscale in a fully CMOS-compatible silicon suboxide thin film. Distinct filamentary plasticity and background conductivity enhancement are reported, suggesting that device behavior might be best described by composite core (filament) and shell (background conductivity) dynamics. Furthermore, constant-current measurements demonstrate an interplay between filament formation and rupture, resulting in current-controlled voltage spiking in nanoscale regions, with an estimated optimal energy consumption of 25 attojoules per spike. This is very promising for extremely low-power neuromorphic computation and suggests that the dynamic behavior observed in larger devices should persist, and indeed improve, as dimensions are scaled down.
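As an order-of-magnitude sanity check on the quoted figure, the energy of a single spike event is the time integral of the instantaneous power. The specific voltage, current, and duration below are illustrative assumptions chosen to reproduce 25 aJ, not measured values from the study.

```latex
E_{\text{spike}} \;=\; \int_{0}^{\Delta t} V(t)\, I(t)\,\mathrm{d}t \;\approx\; V \, I \, \Delta t
% Illustrative (assumed) values, not measurements from the study:
E_{\text{spike}} \approx (1\,\mathrm{V}) \times (100\,\mathrm{pA}) \times (250\,\mathrm{ns})
  = 2.5\times 10^{-17}\,\mathrm{J} = 25\,\mathrm{aJ}
```

The attojoule scale follows directly from the picoampere currents and sub-microsecond timescales accessible in nanoscale regions, which is why shrinking device dimensions is expected to push energy per spike down rather than up.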