- ni2o, Washington, DC, United States
Neuromorphic computing technologies are poised to transform modern computing, yet most work thus far has emphasized hardware development. This review focuses on the latest progress in algorithmic advances specifically for potential use in brain implants. We discuss current algorithms and emerging neurocomputational models that, when implemented on neuromorphic hardware, could match or surpass traditional methods in efficiency. Our aim is to inspire the creation and deployment of models that not only enhance computational performance for implants but also serve broader fields like medical diagnostics and robotics, inspiring the next generation of neural implants.
1 Introduction
Neuromorphic computing is an interdisciplinary area that takes inspiration from biological neural systems to develop computing architectures and hardware. It emphasizes massive parallelism, energy efficiency, adaptability, and co-located memory and processing—contrasting with traditional von Neumann designs (Kudithipudi et al., 2025; Schuman et al., 2017). Neuromorphic algorithms often incorporate phenomena like neuron spiking, synaptic plasticity, and network-level dynamics, aiming to replicate the brain’s style of processing. While some run on standard CPUs or GPUs, specialized neuromorphic chips—analog, digital, or mixed-signal—offer advantages for pattern recognition, sensory data analysis, and real-time learning by minimizing data-transfer bottlenecks (Kudithipudi et al., 2025; Schuman et al., 2022).
Implementing neuromorphic algorithms in hardware involves several steps:
1. Algorithm design: Formulating the mathematical models (e.g., spiking neurons, plasticity rules).
2. Hardware architecture: Selecting how neurons and synapses are represented, such as using analog or digital circuits.
3. Hardware description: Employing design languages (e.g., Verilog) to produce implementable circuit specifications, though analog approaches may demand other tools.
4. Chip fabrication: Physically manufacturing the design, often in CMOS or emerging technologies like memristors.
Brain implants, meanwhile, are medical devices that interface directly with the brain’s bioelectrical environment. Currently, they are used for treating neurological disorders (Benabid, 2003), sensory prosthetics (Wilson and Dorman, 2008), motor prosthetics (Gupta et al., 2023), and mental health treatment (Reardon, 2017). Emerging and future applications include cognitive enhancement, brain-to-brain communication, and neural rehabilitation, such as rewiring neural pathways to restore function after stroke or brain injury (Baker et al., 2023; Chiappalone et al., 2022; Donati and Valle, 2024; Rao et al., 2014; Vakilipour and Fekrvand, 2024).
Integrating neuromorphic computing with these implants could enable more adaptive, low-latency control of neural signals. For example, a neuromorphic chip in an implant might adjust stimulation patterns in real time as brain states fluctuate, minimizing power consumption and improving clinical outcomes (Chiappalone et al., 2022; Donati and Valle, 2024). In this paper, we review neuromorphic models as well as neurocomputational models—ranging from single-neuron abstractions to large-scale network simulations—that appear well-suited for neuromorphic hardware. We also discuss how these models might address the unique demands of brain implants, such as biocompatibility, power constraints, and real-time adaptability in a living neural environment (Miziev et al., 2024).
2 The foundations and evolution of neuromorphic computing
From its beginnings, neuromorphic computing has sought to replicate the brain’s approach to handling information. In 1949, Donald Hebb proposed synaptic plasticity as a mechanism for learning and memory (Hebb, 1988), laying a conceptual foundation for brain-inspired hardware. Carver Mead took these ideas forward in the late 1980s, pioneering analog very-large-scale integration (VLSI) chips such as artificial retinas and cochleas that circumvented the “von Neumann bottleneck” by combining memory and processing in one place, in contrast to the traditional separation of CPU and memory units (Mead and Conway, 1978; Mead, 1989).
Although Mead was not alone in brain-inspired computing, his mixed-signal designs revolutionized the field by reducing latency and allowing data storage and processing to occur simultaneously across the network. Unlike conventional computing, where explicit instruction sequences guide operation, neuromorphic systems derive their “program” directly from the network’s structure (Mead and Conway, 1978; Mead and Ismail, 1989; Schuman et al., 2017). Over time, the term “neuromorphic computing” has broadened, covering a wider range of software and hardware implementations—digital, analog, and mixed-signal (Mehonic et al., 2024; Schuman et al., 2022). In the 1990s and 2000s, neuromorphic chip development accelerated, driven by the DARPA SyNAPSE program. Researchers explored architectures for specialized applications like pattern recognition and sensory processing, including the use of memristors for more efficient, brain-like computation (Hylton, 2007; Markram, 2012). Key developments included silicon neurons, synapse models, and large-scale systems—all embodying plasticity and learning principles.
In the 2010s, there was a significant focus on advancing both hardware and algorithms, including IBM’s “spiking-neuron integrated circuit” TrueNorth (Merolla et al., 2014), Neurogrid (Benjamin et al., 2014), BrainScaleS (Schemmel et al., 2020), Loihi (Davies et al., 2018), and SpiNNaker (Furber et al., 2012). These platforms modeled networks of spiking neurons, achieving low power consumption and real-time processing for sensory data. In parallel, the progression toward fully digital, event-driven neuromorphic chips was advanced by SynSense’s SENECA, ReckOn, Speck, and Xylo, allowing edge-based processing for applications like object identification and sensory processing tasks (Christensen et al., 2022; Yao et al., 2024; Tang et al., 2023; Bos and Muir, 2022). Moreover, commercial neuromorphic solutions have since emerged from companies like BrainChip and Innatera, demonstrating the practical viability of low-power, on-device deployments.
Since 2020, new gradient-based training methods for spiking neural networks (Lee et al., 2016; Eshraghian et al., 2023) and real-time evolutionary optimization (Ahmadi et al., 2024) have opened the door to tasks once dominated by deep learning on GPUs. Moreover, optical “memristors” are being explored for high-bandwidth neuromorphic machine learning (Duan et al., 2024; Nirmal et al., 2024). This continuing evolution is thoroughly reviewed in recent roadmaps such as Christensen et al. (2022), showing an even clearer trajectory toward highly efficient, brain-inspired computing platforms—progress that is particularly advantageous for brain implants, which require ultra-low power consumption, minimal latency, and on-chip learning.
3 Brain implants
3.1 Overview
Brain implants are medical devices designed to interface with the brain’s bioelectrical environment by either reading neural signals to restore lost functions or modulating activity to bypass damaged pathways. Although neurons primarily communicate through electrochemical signals (action potentials and neurotransmitters), studies show that mechanical forces, glial cell interactions, and even quantum phenomena may also affect brain function (Allen and Barres, 2009; Franze, 2013; Hameroff and Penrose, 2013; Lambert et al., 2012). These insights might contribute to our understanding of the “brain code,” possibly allowing better control of the bioelectrical properties of all cells within the brain, affecting brain functions or even cellular regeneration (Boys et al., 2022; Park et al., 2025; Shim et al., 2024; Tanikawa et al., 2023; Zhao et al., 2024).
Such implants could address a variety of disruptions, including protein or ion channel dysfunction, myelin loss, mechanical trauma, and glial cell abnormalities (Roa et al., 2023; Wang et al., 2022). Examples include restoring motor control in Parkinson’s disease (where the loss of dopamine neurons impairs movement), reducing epileptic seizures through responsive neurostimulation (detecting and preventing abnormal firing), and using visual or auditory prosthetics to bypass damaged sensory pathways (Fernandez, 2018; Gupta et al., 2023; Hartshorn and Jobst, 2018). By decoding neural signals, these devices can translate an individual’s intentions into commands for controlling prosthetic limbs, restoring mobility and providing a sense of embodiment (Donati and Valle, 2024). Moreover, implants could address cognitive impairments, as shown by Schiff et al. (2023), who used thalamic deep brain stimulation in traumatic brain injury patients to improve executive functions. Das et al. (2024) characterized attention mechanisms in non-human primates using LFP and spiking data, suggesting that specific stimulation patterns might enhance attention regulation. Systematic reviews indicate noninvasive methods can alleviate ADHD symptoms (Yin et al., 2024), hinting at broader potential for brain implants. Because each condition has its own pathophysiological features, implants need to adapt dynamically to changing states to deliver more personalized and effective therapies. Consequently, neuromorphic algorithms and compatible hardware that model neuronal communication—and can record and stimulate neural activity—might be crucial for advancing these targeted solutions.
3.2 Brain implant workflow
Currently, brain implants follow a multi-step workflow—surgical insertion into the target region, neural signal recording, on- or off-chip processing, and sometimes neuronal stimulation. By directly interfacing with the brain, they can record or modulate neural activity for therapeutic or rehabilitative purposes.
The typical workflow of a brain implant system involves:
1. Insertion of the implant: The device is surgically placed in the target brain area, with electrode placement varying based on the intended application. Insertion is followed by a recovery period, as microglia become activated (Kozai et al., 2014).
2. Recording of neural signals: Electrodes on the implant detect the electrical activity of neurons. The resolution of this recording can vary significantly:
a) Groups of neurons (multi-unit activity)
• Advantages: More stable signals over time; multi-unit recordings are often less susceptible to minor electrode shifts, which can occur due to micromovements of electrodes relative to the tissue (Ramezani et al., 2024; Supèr and Roelfsema, 2004).
• Disadvantages: Less precise; may miss nuanced neural activity and offers no way to directly modulate or bypass specific neurons (Gupta et al., 2020).
b) Single-neuron resolution (single-unit recordings)
• Advantages: Highest precision, allowing for detailed neural decoding depending on the number of electrodes. This enables targeted stimulation and neuron-to-neuron algorithms for data processing and stimulation (Fu et al., 2016; Gupta et al., 2020; Zhang et al., 2023).
• Disadvantages: More challenging to maintain long-term stability due to tissue response or electrode degradation (Kozai et al., 2014).
3. Processing of neural signals: The recorded data undergoes different signal processing steps to filter out noise and extract relevant features. This processing can occur in different locations, a choice that affects the system’s overall performance, reliability, and suitability for continuous, real-time neural signal processing in brain-computer interfaces (Lebedev and Nicolelis, 2006; Schalk et al., 2004):
a) On workstation processing:
• Recording: High-resolution electrodes capture neural signals.
• Processing: Data is sent to an external workstation for processing via a wired or wireless link.
• Advantages: High computational power. Large CPUs or GPUs can run advanced algorithms (e.g., deep learning or complex statistical methods), and workstation hardware and software are easier to update or replace (Zhang et al., 2022).
• Disadvantages: Increased latency, reliance on external devices. While reliance itself is not inherently problematic (consider how our phones rely on satellites), it introduces challenges in data transmission, power transfer, and potential data bottlenecks (Ding, 2024). When processing occurs on an external workstation, data transmission can face challenges such as:
• Signal attenuation and degradation over wireless transmission
• Bandwidth limitations affecting fast, real-time processing
• Security and privacy concerns if sensitive neural data is sent to the cloud
• Increased power consumption for data transmission
• Potential loss of data during transmission
• Reliance on continuous connectivity and compatibility with external systems, risking partial or total loss of functionality if communication is disrupted (Miziev et al., 2024).
b) On-node processing:
• Recording: Electrode arrays record neural activity.
• Processing: Local processing occurs on the node, with external data transfer for further analysis or on-station synchronization.
• Advantages: Potential for reduced latency in real-time neural signal processing compared to workstation processing, though this depends on the specific chip used. Enhanced privacy, as much of the data can remain on the local device rather than being transmitted to the cloud, and improved reliability through reduced network dependence, which helps avoid the data corruption associated with sending signals to and from external CPUs (Miziev et al., 2024).
• Disadvantages: Limited by node processing power. The lack of cloud connectivity means the AI might rely on a single individual’s data rather than aggregated data from many users. Additionally, the power consumption of on-board chips can limit computational speed, as higher clock speeds demand more energy and generate more heat, potentially resulting in tissue damage. Updating algorithms and software for implanted devices also poses challenges (Miziev et al., 2024; Serrano-Amenos et al., 2023).
c) On-implant processing:
• Recording: High-density electrodes integrated into the implant.
• Processing: All signal processing occurs on the implant, with the possibility of external data transfer if needed.
• Advantages: Minimal latency, real-time processing capability, and reduced data transmission requirements (Ding, 2024; Miziev et al., 2024). Additionally, patient privacy can be further protected by restricting data flow to the implant itself.
• Disadvantages: Severely limited by the implant’s power and size constraints and by potentially increased temperature in the brain environment. The implant could process roughly 1.5 GB of data per minute locally, eliminating the need for time-consuming data transmission (at 1,024 channels sampled at 20 kHz with 10-bit resolution, a minute of uncompressed data amounts to about 1.5 GB; see the calculation sketch after this list). However, due to power restrictions and safety regulations, the on-board chip operates at a slower speed than external processors. While sending data externally would incur transmission delays, external devices can still process data much faster (Musk, 2019; Stanslaski et al., 2012; Figure 1).
4. Stimulation: Based on the processed information, the implant may stimulate specific brain regions. The decision to stimulate and the parameters of stimulation (frequency, intensity, duration) can be determined either by on-implant algorithms or by external systems, depending on the implant’s design and capabilities.
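To make these bandwidth figures concrete, the sketch below reproduces the back-of-the-envelope arithmetic for the on-implant example above (1,024 channels sampled at 20 kHz with 10-bit resolution); the 1 Mbps wireless budget is an assumed figure for illustration, of the same order as the Bluetooth-class links discussed in Section 3.3.2.

```python
# Back-of-the-envelope data-rate estimate for a high-density implant.
# Channel count, sampling rate, and bit depth follow the example in the text;
# the wireless budget is an assumption for illustration.

channels = 1024
sample_rate_hz = 20_000
bits_per_sample = 10

raw_bps = channels * sample_rate_hz * bits_per_sample   # raw acquisition rate
raw_mb_per_s = raw_bps / 8 / 1e6                        # megabytes per second
gb_per_minute = raw_mb_per_s * 60 / 1000                # gigabytes per minute

link_bps = 1_000_000                                    # assumed radio budget (1 Mbps)
compression_needed = raw_bps / link_bps

print(f"raw rate:           {raw_bps / 1e6:.1f} Mbit/s ({raw_mb_per_s:.1f} MB/s)")
print(f"one minute of data: {gb_per_minute:.2f} GB")
print(f"compression needed: {compression_needed:.0f}x to fit the link")
```

Running this yields roughly 205 Mbit/s raw, about 1.5 GB per minute, and a required compression ratio on the order of 200×, consistent with the figures discussed in Section 3.3.2.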

Figure 1. Signal processing workflow for brain implants across three tiers: external (on-workstation), on-node, and fully on-implant processing. Each tier highlights differences in hardware, bandwidth, tasks, and trade-offs between latency, power, and computational efficiency.
3.3 Current limitations
3.3.1 Algorithms and dynamic neural environments
Many brain implants use machine learning methods (e.g., deep neural networks, RNNs, SVMs) to interpret neural activity and enable communication with external devices (Chapin et al., 1999; Hochberg et al., 2012). Although these techniques have proved beneficial (e.g., in Neuralink prototypes such as the pig demo and monkey cursor control), they often rely on relatively simple models and fixed stimulation parameters that may not suit the rapidly changing conditions of living neural tissue, its plasticity, or fluctuations in the brain’s state over a human lifetime (Gulino et al., 2019; He et al., 2020; Musk, 2019).
3.3.2 Data rates, compression, and transmission
Another substantial issue is data compression. The amount of neural data processed by a brain implant can vary widely, depending on factors like the number of electrodes, the sampling rate, spatial and temporal resolution (fine-grained control for individual neurons), and whether the device monitors spike-level signals (sampled at 20–30 kHz to resolve individual action potentials) or local field potentials (LFPs). Issues such as data compression or wireless transfer can slow real-time feedback for rapid neural events (Gilja et al., 2012; Harrison, 2008; Liu et al., 2020; Prosky et al., 2021). At the lower end, a system with a handful of electrodes recording at a few kilohertz might handle kilobytes to megabytes of data per second, whereas a high-density array with hundreds or thousands of channels could generate tens to hundreds of megabytes per second.
For instance, Neuralink generates approximately 200 Mbps of electrode data from its high-density implant but can only transmit 1–2 Mbps via Bluetooth—meaning the implant operates at about 0.5–1% of its potential. The required compression ratio of over 200× is typically approached through feature extraction or on-chip compression, yet these techniques do not fully exploit the implant’s capabilities. Furthermore, these systems often struggle with power consumption, size constraints, and the need for external processing units, which can limit their practicality and scalability (Gulino et al., 2019; Miziev et al., 2024; Neuralink Compression Challenge, 2025).
3.3.3 Materials and biocompatibility
Additionally, material constraints, thermal noise, and the possible need for immune-suppressing therapies can affect data quality, device performance, and patient well-being (Miziev et al., 2024). Some wound-healing and materials research suggests that specific stimulation patterns may speed healing—possibly reducing implant rejection or tissue scarring—thereby limiting signal-quality degradation and improving the precision of neural recording, although these methods have not been extensively applied in clinical practice (Boys et al., 2022; Fani et al., 2023; Miziev et al., 2024). Polymer coatings, hydrogel encapsulation, and flexible bioelectronics aim to reduce foreign-body responses and provide stable, long-term performance for neuromorphic implants (Hwang et al., 2015; Polikov et al., 2005; Qi et al., 2023; Salatino et al., 2017).
3.3.4 Scalability, complexity, adaptability
Most current implants remain relatively large, limiting improved spatial or temporal resolution (Miziev et al., 2024). The complexity of neural signals—firing patterns, synaptic plasticity, network-level feedback—requires precise placement and calibration to avoid undesired side effects, such as cognitive dysfunction or behavioral changes (Valle et al., 2024). Many devices do not self-adjust to evolving neural conditions without external control, reducing their capacity to stay aligned with the user’s needs in the long term, especially as the brain’s functional state changes due to learning, aging, or disease progression (Shanechi et al., 2016; Zhou et al., 2023).
Addressing these issues is important for advancing brain implants. Bio-inspired robotics, improved materials design, and neuromorphic computing—with event-driven, efficient on-chip processing—may pave the way to adaptive, biologically informed systems (Chiappalone et al., 2022; Qi et al., 2023; Zhang et al., 2024). Fully implantable devices could reduce latency and enhance real-time interactions, though obstacles such as energy management, heat dissipation, and data compression methods remain unsolved (Chiappalone et al., 2022; Zhang et al., 2022).
4 Neuromorphic computing for brain implants
4.1 Brain-inspired foundations of neuromorphic computing
The human brain is a computational marvel, reaching exaflop-scale performance while consuming just ~20 watts (Yao et al., 2024). This efficiency contrasts sharply with traditional computing architectures, which are approaching physical limits. Moore’s Law, predicting a doubling of transistors every 2 years, is slowing due to transistor miniaturization constraints (Schaller, 1997), and Dennard scaling, which maintained power density as transistors shrank, is also faltering, complicating efforts to boost performance without sacrificing energy efficiency (Horowitz, 2014). These trends highlight the brain’s computational edge and the need to draw inspiration from its design.
Key features of the brain’s paradigm include:
• Massive parallelism: Unlike the sequential processing of traditional computers, the brain handles distributed, simultaneous computations across billions of neurons (Schuman et al., 2017). While architectures like SIMD, MIMD, or Dataflow can excel at specific tasks (e.g., image processing, matrix operations), they still lack the versatility and energy efficiency of the brain’s parallel computation across a wide variety of tasks from sensory processing to abstract reasoning (Hennessy and Patterson, 2011; Roy et al., 2019).
• Integrated memory and processing: Traditional von Neumann architectures separate memory and processing, creating bottlenecks absent in the brain, where computation and memory are thought to be integrated at the synaptic level (Isik et al., 2024; Kastellakis et al., 2015).
• Adaptability and learning: The brain’s real-time adaptability and plasticity outstrip machine learning algorithms, which struggle with catastrophic forgetting and, even with techniques such as incremental learning, lack comparable energy efficiency (Kirkpatrick et al., 2017; Aleixo et al., 2024; Sadegh-Zadeh et al., 2024).
• Fault tolerance: Biological networks remain functional despite significant neuron loss, as seen in Alzheimer’s patients retaining abilities with reduced brain mass (Su et al., 2016). Traditional systems, however, are vulnerable to single points of failure, though efforts using genetic algorithms aim to address this (Zlokapa et al., 2022; Su et al., 2016).
• Handling noisy data: The brain processes noisy, incomplete inputs effortlessly, using mechanisms like sensory substitution. In cases where one sensory modality is impaired, the brain can rewire itself to process information from other senses to compensate for the loss (Merabet and Pascual-Leone, 2009). While there have been advancements in machine learning, traditional systems still mostly require precise data (Caiafa et al., 2021).
Neuromorphic computing largely addresses these issues. While the Harvard architecture already separates data memory and program memory (Hennessy and Patterson, 2011), neuromorphic approaches go further, supporting dynamic, real-time data processing at high throughput and low energy consumption by avoiding continuous data transfers between discrete memory and processing units. This reduces the bandwidth limitations of current technologies. Compression methods in neuromorphic systems focus on essential spikes or relevant features, reducing data size and lessening the load on external devices or wireless links (Roy et al., 2019; Lee and Lee, 2020). Because information is handled locally, these pipelines can cut back on unnecessary transfers, enhance real-time compression, and improve overall performance (Schuman et al., 2017; Rhodes et al., 2019).
Neuromorphic systems can improve spatial and temporal resolution through bio-inspired architectures with high-density, low-power processing units, reflecting the brain’s ability to process information at multiple scales. This leads to improved spatial resolution by better mapping—fitting more sensors or compute elements into smaller areas—and boosts temporal resolution through parallel, event-driven operations, allowing real-time monitoring and minimal-latency stimulation of neural activity (Hall et al., 2012; Gonzalez et al., 2024; Peres and Rhodes, 2022). Such responsiveness is crucial for effective neuroprosthetic control or closed-loop interventions with quick feedback (Niu et al., 2020).
In a prosthetic limb, for instance, a neuromorphic processor can interpret signals from sensory receptors instantly and adjust motor commands in actuators, resulting in smooth, natural movements aligned with the body’s reflex responses—especially in dangerous situations where fight-or-flight responses matter (Niu et al., 2020; Song et al., 2024).
A key feature of these systems is event-driven computation, where processing occurs only in response to significant input changes or ‘events’, rather than continuous operation (Ji et al., 2023; Shahsavari et al., 2023). This approach has been deployed in vision-processing tasks on neuromorphic platforms like Speck, where real-time, event-based sensing enables low-latency, energy-efficient object recognition (Yao et al., 2024). It is particularly effective at managing temporally sparse activity, which is useful in applications such as detecting rare events or monitoring long-term trends, ensuring that the system remains efficient and responsive only when necessary (Aboumerhi et al., 2023). This also makes neuromorphic designs more energy-efficient, reducing power consumption and minimizing the thermal impact on surrounding brain tissue.
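To make the contrast with clock-driven processing concrete, the minimal sketch below counts update operations for a sparse spike stream; the 1% event rate is an arbitrary illustrative value, and real neuromorphic hardware realizes this principle in circuitry rather than a software loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse binary input stream: events occur on ~1% of time steps.
stream = rng.random(100_000) < 0.01

# Clock-driven processing: the update runs on every step regardless of input.
clock_ops = len(stream)

# Event-driven processing: work is performed only when an event arrives.
event_ops = len(np.flatnonzero(stream))

print(f"clock-driven updates: {clock_ops}")
print(f"event-driven updates: {event_ops} ({event_ops / clock_ops:.1%} of clock-driven)")
```

For temporally sparse neural data, the number of updates (and hence dynamic power) scales with the event rate rather than the clock rate, which is the source of the efficiency gains discussed above.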
Incorporating synaptic plasticity mechanisms, particularly spike-timing-dependent plasticity (STDP), allows neuromorphic systems to learn and adapt from new stimuli (Bill and Legenstein, 2014). This learning capability, together with the systems’ inherent parallel processing, strengthens pattern recognition and sensory data handling—advantageous in plastic neuronal environments (Zhu et al., 2021; Büchel et al., 2022a,b). Recent research has extended on-device computing capabilities, achieving state-of-the-art performance in real-time audio tasks (e.g., speech recognition) and vision tasks (e.g., object detection), all with notable energy efficiency (Yao et al., 2024; Yik et al., 2025). The scalability of neuromorphic computing has been demonstrated in projects like SpiNNaker, which can simulate millions of neurons in real time and simultaneously across multiple chips (Furber et al., 2012; Davies et al., 2018). This scalability spans from small, energy-efficient sensors to comprehensive neural networks modeling complex behaviors. For example, DeWolf et al. (2016) employed spiking neural networks for robotic arm control, while others have explored neuromorphic sensory systems (Liu and Delbruck, 2010), speech recognition (Xiang et al., 2023), and energy-efficient image classification (Pawlak et al., 2024).
Additionally, neuromorphic systems often include elements of stochasticity, reflecting the probabilistic character of biological neural networks (Petrovici et al., 2016). This enhances robustness and adaptability to uncertain, variable environments. For example, a recent study on neuromorphic-based closed-loop neuroprostheses (Chiappalone et al., 2022) describes how real-time data processing, energy efficiency, and bio-inspired computation can help reestablish or substitute injured neural pathways, going beyond sensory or motor restoration and potentially enabling direct brain-level repair. The broad applicability of neuromorphic algorithms aligns with the type of information processing found in the brain. However, as the field advances, the focus is expanding beyond just hardware. There is a growing need to integrate neuromorphic systems with algorithms and real-world applications, increasing our understanding of neuronal communication models to allow for the development of even more advanced neural interfaces (Furber, 2016).
4.2 Mixed-signal design in neuromorphic systems
Mixed-signal design is one of the techniques improving the efficiency of neuromorphic computing, integrating both analog and digital circuitry to mimic the brain’s information processing. This approach combines the flexibility and precision of digital systems with the energy efficiency and continuous-time processing capabilities of analog circuits. Quan et al. (2023) demonstrated the effectiveness of mixed-signal neuromorphic circuits in implementing energy-efficient and space-efficient Spiking Neural Networks (SNNs) using 55 nm CMOS technology. The integration of analog and digital components allows neuromorphic systems to better model the brain’s parallel processing and adaptive learning capabilities while maintaining computational efficiency. For instance, Benjamin et al. (2014) developed Neurogrid, a mixed-analog-digital multichip system for large-scale neural simulations, showing the potential of this approach.
4.2.1 Benefits of mixed-signal design
Analog circuits are highly energy-efficient, often performing specific computations with far lower power consumption than digital alternatives. Their continuous-time nature allows real-time processing of sensory inputs, which aligns with how biological systems operate (Indiveri et al., 2011). Additionally, the high density of neural elements in mixed-signal circuits supports more compact designs for neuromorphic systems (Moradi and Indiveri, 2013).
4.2.2 Challenges of mixed-signal design
However, analog components are inherently more sensitive to noise and environmental variations, which can impact system reliability (Qiao et al., 2015; Schuman et al., 2017). Recent efforts have begun to address issues such as mixed-signal mismatch, particularly during training, through techniques like mismatch-aware training algorithms (Büchel et al., 2022a,b) and improved circuit design methodologies (Murray and Edwards, 1994). Addressing these issues is crucial for the future development of mixed-signal neuromorphic architectures (Roy et al., 2019).
4.3 Algorithm to hardware conversion trade-offs
Neuromorphic hardware can be categorized into analog, digital, or mixed-mode (analog/digital) systems. While analog designs offer benefits such as a smaller footprint and lower power requirements, digital approaches tend to be more adaptable and cost-effective, for example, for running large-scale SNN models (Indiveri et al., 2011; Seo and Seok, 2015). For instance, TrueNorth (Merolla et al., 2014) and Loihi (Davies et al., 2018) each exemplify large-scale digital neuromorphic chips, achieving energy efficiency and event-driven spiking at scale. Small-scale digital neuromorphic processors have also gained attention for their potential in edge computing applications, offering low-power, real-time processing capabilities (Yik et al., 2025). In addition, mixed-signal designs combine analog front-ends with digital back-ends, supporting continuous-time processing (Roy et al., 2019).
The traditional approach of using high-level programming languages like Python for neuromorphic algorithm development, followed by conversion to hardware, still comes with its challenges. Schuman et al. (2022) emphasize the importance of co-designing algorithms and hardware to fully benefit from the characteristics of neuromorphic systems. Direct implementation of neuromorphic algorithms on specialized hardware, rather than relying on software intermediaries, can lead to substantial improvements in energy efficiency and processing speed. This is particularly relevant in neurotechnology applications, where real-time processing of neural signals is crucial (Furber et al., 2012). However, recent work with field-programmable gate arrays (FPGAs) has shown potential for neuromorphic solutions. A study by Zhang et al. (2019) reported speed-ups compared to CPU implementations and lower power consumption compared to GPU-based systems when SNNs were placed directly on FPGAs (Javanshir et al., 2022). Benchmarking efforts, such as the NeuroBench project (Yik et al., 2025) and edge audio evaluations (Bos and Muir, 2024), show the efficiency of small-scale digital neuromorphic processors in low-power, on-device sensory processing, including real-time audio (e.g., speech recognition) and vision (e.g., object detection) tasks. These findings show their potential for power-sensitive applications requiring minimal latency and real-time performance such as implants.
5 Neuromorphic algorithmic approaches for brain implants
Neuromorphic computing can address existing brain-implant limitations, offering an approach that improves energy efficiency, information transfer, and adaptive behavior—possibly including future memory storage. We review a range of neuromorphic algorithms, from traditional ones and potential hybrid methods to neurocomputational models not yet implemented, and show how to optimize them for the demanding requirements of brain implants: real-time processing, low power usage, and adaptive learning in complex, noisy environments.
5.1 Spiking neural network algorithms
5.1.1 Fundamentals of SNNs
Spiking Neural Networks (SNNs) have significant relevance in the development and implementation of brain implants due to their ability to model natural neural processes, offering several advantages for brain-computer interfaces (BCIs), such as decoding neural signals, sensory substitution, or better personalization (Roy et al., 2019; Liao et al., 2024). The conceptual roots of SNN algorithms trace back to the mid-20th century, inspired by the work of neuroscientists like Alan Lloyd Hodgkin and Andrew Huxley (Hodgkin and Huxley, 1952). The formal introduction of SNNs as we know them today is often attributed to Wulfram Gerstner and his colleagues in the 1990s. Gerstner’s work on the Spike Response Model (SRM) in 1995 provided a framework for describing the behavior of spiking neurons mathematically (Gerstner, 1995).
However, the term “Spiking Neural Network” gained prominence in the late 1990s and early 2000s, with papers by Wolfgang Maass and William Bialek contributing significantly to the field (Maass, 1996; Bouvier et al., 2019). This coincided with advancements in VLSI technology that made it feasible to implement large-scale spiking networks in hardware (Indiveri et al., 2009; Indiveri and Liu, 2015). SNNs closely model the information processing mechanisms of biological neurons through several key principles:
1. Membrane potential dynamics: Each artificial neuron in an SNN maintains a membrane potential, which is a time-varying state variable (Izhikevich, 2003). The membrane potential changes in response to input spikes and decays over time when no input is received. In brain implants, this might allow for more natural interaction with surrounding biological neurons (Zhang et al., 2021).
2. Threshold and spiking: When the membrane potential exceeds a certain threshold, the neuron “fires” or emits a spike. After firing, the neuron enters a refractory period during which it is less likely or unable to fire again (Zhang et al., 2021; Guo et al., 2022). For brain implants, this thresholding mechanism provides a natural way to filter out noise and focus on significant neural events, improving signal quality and reducing power consumption (Shah et al., 2024).
3. Temporal integration: Neurons integrate incoming spikes over time, allowing them to process temporal patterns in the input data (Takaghaj and Sampson, 2024). This helps brain implants interpret and respond to complex neural signals more accurately. Additionally, temporal integration stores some representation of the signal in an analog form: the membrane voltage (and its fluctuations) encodes input amplitude and timing, functioning like parallel weighting coefficients in machine learning. This parallel processing capability is important for brain implants, allowing them to handle the massive parallelism of neural computations efficiently (Peres and Rhodes, 2022; Müller et al., 2023).
The dynamics of a spiking neuron can be described mathematically using differential equations. One popular model is the Leaky Integrate-and-Fire (LIF) neuron, which could be used for efficient memory, as noted by Kim et al. (2023). The dynamics of the LIF neuron are governed by the following equation, which models the evolution of the membrane potential over time:

τ · dV/dt = −(V − Vrest) + R · I(t)

Where:
• V: Membrane potential
• Vrest: Resting potential
• τ: Membrane time constant (τ = R·C, where C is the membrane capacitance)
• R: Membrane resistance
• I(t): Input current
• When V reaches the threshold Vth, a spike is emitted, and V is reset to Vreset.
In the context of brain implants, the LIF model provides a framework for simulating and interpreting neural signals. For example, the membrane time constant τ determines how quickly a neuron responds to stimuli, which can be tuned to match biological neural processing speeds, approximating different biological neurons. This is important for applications like prosthetics, where accurate and timely decoding of neural signals is essential for smooth motor control (Donati et al., 2019; Donati and Valle, 2024).
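As a simple illustration, the following sketch integrates the LIF equation above with the forward Euler method; all parameter values are illustrative rather than fitted to any particular biological neuron.

```python
import numpy as np

# Forward-Euler simulation of the LIF equation:
#   tau * dV/dt = -(V - V_rest) + R * I(t)

dt = 0.1e-3          # time step: 0.1 ms
T = 0.2              # simulate 200 ms
tau = 20e-3          # membrane time constant (s)
R = 100e6            # membrane resistance (ohm)
V_rest = -70e-3      # resting potential (V)
V_th = -54e-3        # spike threshold (V)
V_reset = -80e-3     # post-spike reset (V)

steps = int(T / dt)
I = np.full(steps, 0.2e-9)   # constant 0.2 nA input current
V = V_rest
spike_times = []

for t in range(steps):
    dV = (-(V - V_rest) + R * I[t]) / tau
    V += dV * dt
    if V >= V_th:            # threshold crossing: emit spike, then reset
        spike_times.append(t * dt)
        V = V_reset

print(f"{len(spike_times)} spikes; mean rate {len(spike_times) / T:.1f} Hz")
```

Changing τ (here via R and an implicit C = τ/R) alters how quickly the neuron integrates input, which is the tuning knob referred to above for approximating different biological response speeds.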
5.1.2 Information encoding in SNNs
The concept of encoding information in spike timing and frequency, rather than continuous values, has its roots in the study of biological neurons. This shift in perspective arose as neuroscientists investigated how the brain represents and processes information. Work in this field includes Eric Kandel’s research in the 1960s on synaptic transmission and plasticity, David Marr’s theories on neural computation in the 1970s, and Moshe Abeles’ exploration of precise spike timing in the 1980s (Kandel and Spencer, 1968; Kandel, 1976; Marr, 1969). These foundational studies laid the groundwork for the encoding strategies now used in Spiking Neural Networks (SNNs).
In SNNs, information can be encoded in several ways, such as Time-to-First Spike (TTFS) coding, Phase coding, and Burst coding, as well as the following:
1. Rate coding: The average number of spikes over a time window represents the intensity of a signal. In brain implants, rate coding can be used to interpret sensory or motor signals (Liu and Delbruck, 2010; Guo et al., 2021). For example, the frequency of neural spikes detected in the motor cortex could be translated into the strength or speed of movement in a robotic limb (Gupta et al., 2023; Chapin, 2004).
2. Temporal coding: The exact timing of spikes carries information. Temporal coding is critical for real-time processing in brain implants, enabling precise interpretation of rapidly changing neural signals (Cariani, 2001). This could be especially useful for applications like auditory prosthetics, where timing plays a key role in speech recognition (Aldag and Nogueira, 2024; Saddler and McDermott, 2024).
3. Rank order coding: The order in which neurons in a layer fire encodes information. Rank order coding can improve efficiency in brain implants by prioritizing the most significant neural inputs, reducing computational overhead while maintaining accuracy. This approach is particularly beneficial in energy-constrained systems like neural prosthetics (Loiselle et al., 2006).
5.1.2.1 Practical data-to-spike conversion
Rate/temporal/rank order describe intrinsic coding strategies in SNNs (or biological systems), while binning/spike-count/charge-injection are applied methods for converting external signals to spikes. This aspect of input encoding is often overlooked: translating numerical data (e.g., raw sensor signals) into spikes so that SNNs can process them. Recent work compares binning, spike-count encoding, charge-injection, and more complex hierarchical strategies, demonstrating that the best input encoding depends heavily on the task and hardware (Schuman et al., 2019). For brain-implant scenarios, choosing an appropriate encoding method can significantly impact power usage, latency, and overall decoding accuracy—potentially just as critical as the learning algorithms themselves.
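As a minimal sketch of such data-to-spike conversion, the code below encodes a normalized scalar input with two simple schemes, Poisson-style rate coding (a binning-like approach) and time-to-first-spike coding; the window length and input value are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, n_steps=100):
    """Rate coding: a value in [0, 1] sets the per-step spike probability."""
    return (rng.random(n_steps) < x).astype(np.uint8)

def ttfs_encode(x, n_steps=100):
    """Time-to-first-spike coding: stronger inputs spike earlier
    (a single spike whose latency decreases with x)."""
    train = np.zeros(n_steps, dtype=np.uint8)
    if x > 0:
        train[int((1.0 - x) * (n_steps - 1))] = 1
    return train

signal = 0.8  # e.g., a normalized sensor or electrode feature
print("rate code: ", int(rate_encode(signal).sum()), "spikes in 100 steps")
print("TTFS code: first spike at step", int(np.argmax(ttfs_encode(signal))))
```

Note the trade-off visible even in this toy example: rate coding needs many time steps (and spikes) to convey one value, whereas TTFS conveys it with a single, precisely timed spike, which is one reason encoding choice affects power and latency on an implant.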
5.1.2.2 ANN-to-SNN conversion
ANN-to-SNN conversion offers a complementary approach for creating spiking networks. Instead of relying on spike-based encoding from the outset, this technique converts pre-trained artificial neural networks (ANNs) into spiking equivalents—preserving learned weights and architecture (Wang et al., 2023). Such conversion could potentially allow the use of existing, highly effective deep learning models in brain implants while benefiting from the energy efficiency of SNN implementations (Yamazaki et al., 2022). For instance, an ANN trained for speech recognition could be converted to an SNN, potentially enabling low-power, real-time decoding of auditory signals in a cochlear implant. Researchers have demonstrated promising ANN-to-SNN conversions for image classification (Rueckauer et al., 2017), which could be adapted for neural image input in visual implants, though this specific application has yet to be tested.
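The toy sketch below illustrates the core idea behind rate-based conversion, in the spirit of Rueckauer et al. (2017): a ReLU unit's activation is approximated by the firing rate of an integrate-and-fire neuron using "reset by subtraction". Weights, inputs, and the simulation length are arbitrary illustrative values.

```python
import numpy as np

w = np.array([0.4, -0.2, 0.7])   # illustrative pre-trained ANN weights
x = np.array([0.9, 0.5, 0.3])    # illustrative input

relu_out = max(0.0, float(w @ x))   # the ANN unit's activation

# Integrate-and-fire equivalent: accumulate the same weighted input each
# step; on crossing the threshold, spike and subtract the threshold.
T, v_th = 1000, 1.0
v, spikes = 0.0, 0
for _ in range(T):
    v += float(w @ x)
    if v >= v_th:
        spikes += 1
        v -= v_th

print(f"ReLU activation : {relu_out:.3f}")
print(f"SNN firing rate : {spikes / T:.3f} spikes/step")
```

With enough time steps the spike rate converges to the ReLU activation, which is why conversion preserves a trained network's function while gaining the sparsity of spike-based execution.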
5.1.3 Learning and optimization algorithms
Developing efficient learning and optimization algorithms is critical for enabling neuromorphic computing in brain implant chips, where on-chip computing must balance power efficiency, real-time processing, and adaptability.
5.1.3.1 Spike-timing-dependent plasticity
STDP is often regarded as an important learning rule in Spiking Neural Networks (SNNs), particularly for brain implants seeking a biologically plausible way to adapt (Indiveri and Liu, 2015; Lan et al., 2021). By adjusting synaptic strengths based on the timing of pre- and postsynaptic spikes, STDP allows implants to modify responses to a user’s neural signals over time.
ΔW = A+ · exp(−Δt/τ+)  if Δt > 0 (pre before post: potentiation)
ΔW = −A− · exp(Δt/τ−)  if Δt < 0 (post before pre: depression)

where ΔW is the change in synaptic weight and Δt = tpost − tpre is the time difference between spikes. Parameters (A+, A−, τ+, τ−) shape the STDP curve and can be adjusted for the desired balance between strengthening or weakening connections based on the user’s neural activity.
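A direct transcription of this pair-based window into code might look as follows; the amplitude and time-constant values are illustrative, not tuned for any task.

```python
import numpy as np

# Illustrative STDP parameters (see the equation above).
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20e-3, 20e-3   # seconds

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair, delta_t = t_post - t_pre."""
    if delta_t > 0:      # pre before post: potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    if delta_t < 0:      # post before pre: depression
        return -A_minus * np.exp(delta_t / tau_minus)
    return 0.0

for dt_ms in (5, 20, -5, -20):
    print(f"dt = {dt_ms:+d} ms -> dW = {stdp_dw(dt_ms * 1e-3):+.4f}")
```

In an implant setting, an on-chip rule of this form would nudge synaptic weights after each observed spike pair, letting the decoder drift with the user's changing neural statistics.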
Recent work by Subramoney et al. (2024) presents a new perspective on SNN learning: fast adaptation that does not rely entirely on synaptic plasticity. Instead, it draws on a combination of slower plastic changes and faster network dynamics, incorporating biologically inspired elements such as spike frequency adaptation (SFA)—observed in a significant portion of cortical neurons. This approach allows SNNs to learn in a single trial, aided by recurrent connections that support key network behaviors. Synaptic weights still represent broader information (such as priors or task structures), but the system’s adaptive properties stem more from real-time dynamics and SFA than from plasticity-based rules like STDP.
5.1.3.2 SpikeProp
Introduced by Bohte in 2002, SpikeProp is a gradient-based learning algorithm for spiking networks that adjusts synaptic weights according to the timing of individual spikes. It could enable brain implants to learn complex mappings—such as translating neural signals into motor control for a prosthetic hand—using supervised learning approaches (Bohte et al., 2000). For on-chip computing, its timing-based update is essential for real-time processing of complex neural signals in tasks such as motor control or sensory processing (Shrestha and Song, 2014). Recent advancements have extended SpikeProp with event-based update algorithms that enable exact gradient computation, improving training accuracy and efficiency (Wunderlich and Pehle, 2021).
5.1.3.3 ReSuMe (remote supervised method)
Introduced by Filip Ponulak in 2005, ReSuMe combines STDP with supervisory signals (Ponulak, 2005; Ponulak and Kasiński, 2009). In contrast to purely Hebbian or STDP-based approaches, ReSuMe adjusts synaptic weights to minimize the timing discrepancy between the network’s output spikes and the target output pattern. This makes it well-suited to scenarios where a reference or “correct” spiking pattern is available. ReSuMe could help implants learn from external feedback, such as from a physical therapist during rehabilitation, to improve neural signal decoding over time. As the user practices a motor task (e.g., hand movements), the implant’s spiking network receives corrective signals that guide STDP adjustments, refining decoding by continuously aligning the SNN’s spike outputs with the target pattern.
5.1.3.4 Spike-based backpropagation and BPTT for SNNs
Recent work by researchers like Zenke and Ganguli (2018) applies backpropagation-like learning (through surrogate gradients) to spiking neural networks (SNNs), allowing them to learn complex tasks while preserving energy efficiency—essential for brain implants (Gygax and Zenke, 2024). This approach adapts traditional gradient-based methods to handle the discrete, event-driven nature of SNNs, enabling efficient training for applications like neural signal decoding and personalized therapies (Eshraghian et al., 2023; Lee et al., 2016). A closely related approach, Backpropagation Through Time (BPTT), extends these methods by modeling the temporal dynamics of SNNs, making it especially suitable for personalized therapies or rehabilitation (Bird and Polivoda, 2021; Nápoles et al., 2024). By adjusting synaptic weights based on user-specific neural activity patterns, BPTT could potentially optimize interventions such as deep brain stimulation for Parkinson’s disease, where precise timing matters for symptom management.
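As a sketch of the surrogate-gradient idea, the example below (written against PyTorch's autograd API) uses a hard threshold in the forward pass and a fast-sigmoid surrogate derivative in the backward pass, one common choice in this literature; the scale constant is an arbitrary hyperparameter.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; smooth fast-sigmoid derivative backward."""
    scale = 10.0

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()          # spike if membrane exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # d(spike)/dv is a delta function; replace it with a smooth surrogate.
        surrogate = 1.0 / (1.0 + SurrogateSpike.scale * v.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

# The otherwise non-differentiable threshold now passes useful gradients:
v = torch.randn(5, requires_grad=True)
spike_fn(v).sum().backward()
print(v.grad)
```

Unrolling such a spiking layer over time and backpropagating through the unrolled graph is precisely the BPTT variant mentioned above.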
Additionally, other forms of plasticity and learning rules are being explored in SNNs. For example, some research focuses on unsupervised feature learning with winner-takes-all-based STDP (Ferré et al., 2018), while others investigate the dynamics of phase oscillator networks with synaptic weight and delay plasticity (Chauhan et al., 2022).
5.2 Advanced SNN algorithms and hardware implementation
Neuromorphic computing algorithms, particularly those based on Spiking Neural Networks (SNNs), show great promise for applications in brain implants. These algorithms model the brain’s natural processing mechanisms, offering potential improvements in efficiency and adaptability.
5.2.1 Models
1. Spiking convolutional neural networks (SCNNs) adapt CNNs into a spiking format—typically via rate or temporal coding—for low power, real-time processing of visual or sensory inputs (Kheradpisheh et al., 2017). For example, SCNNs can transform retinal implant signals into spike patterns interpretable by the brain (Yu et al., 2020), forming a basis for edge vision–based visual prostheses (Yao et al., 2024). An unsupervised SCNN approach has also bridged the gap between artificial and biological neurons by extracting image features and using receptive field–based regression to predict fMRI responses (Wang et al., 2023). Another study shows SCNNs detecting anticipatory slow cortical potentials for braking intention via EEG, outperforming standard CNNs, EEGNet, and graph neural networks with over 99% accuracy (Lutes et al., 2024; Kheradpisheh et al., 2017). This suggests strong potential for real-time motor control in driver assistance or prosthetic applications.
2. Spiking recurrent neural networks (SRNNs) integrate spiking neurons with recurrent architectures to process temporal neural signals, maintaining an internal state for tasks like predicting speech or motor patterns (Bohnstingl et al., 2022; Yamazaki et al., 2022). They could enable closed-loop systems to anticipate epileptic seizures with targeted, energy-efficient stimulation, though clinical efficacy requires further study. Adaptive SRNNs, with multiple timescales and self-recurrent parameters, match or surpass classical RNNs in sequential tasks, offering sparse spiking and > 100x energy savings (Yamazaki et al., 2022), making them ideal for motor-control implants and real-time monitoring on neuromorphic hardware (Willsey et al., 2022; Samee et al., 2022).
3. Spiking Feed-forward Neural Networks (SFNN) trained with gradient-descent methods enable efficient pattern recognition and sensory processing while remaining compatible with low-power hardware (Bauer et al., 2023). Using surrogate gradients and temporal coding, these networks approximate continuous derivatives to backpropagate errors effectively, achieving high accuracy in tasks like visual classification with energy-efficient sparse spiking (Zenke and Ganguli, 2018). This makes them well-suited for brain implants decoding sensory inputs in real time (Contreras et al., 2023).
5.2.2 Neuromorphic-specific technologies
Real-world deployment of brain implants demands low-power, adaptive, and reliable hardware. Technologies such as event-based processing, memristive learning, and non-volatile memory (NVM) offer essential solutions to these challenges.
5.2.2.1 Event-based algorithms
Event-based algorithms process data only when specific events (e.g., spikes) occur, reducing power usage and extending device operational time. This approach is similar to the behavior of biological neurons, which fire only upon receiving significant inputs. For example, Posch et al. (2014) demonstrated a retinomorphic event-based vision sensor adaptable to various sensory modalities, including auditory signals, for energy-efficient, real-time implant applications. Event-driven spiking CNN hardware, such as the Speck platform, further enhances this capability by enabling low-latency, energy-efficient processing of sensory data, making it ideal for real-time brain implant applications (Yao et al., 2024).
5.2.2.2 Memristive learning algorithms
Memristive learning algorithms use memristors—devices that retain their resistance state when powered off—to implement synaptic plasticity directly in hardware. This design enables on-the-fly learning with low latency and resembles the way biological synapses adjust their strength over time (Boybat et al., 2018; Huang et al., 2023). Multiple studies demonstrate the use of memristors to implement synaptic plasticity in hardware. For example, BiFeO₃ (BFO)-based memristive devices have been shown to support various long-term plastic functions, including spike timing-dependent plasticity (STDP), cycle number-dependent plasticity (CNDP), and spiking rate-dependent plasticity (SRDP) (Du et al., 2021). Moreover, the TS-PCM device demonstrates the ability to modulate its behavior based on stimulus history, similar to neuronal plasticity (Sung et al., 2022). These findings suggest the potential for devices that can adjust to a user’s neural patterns.
5.2.2.3 Non-volatile memory technologies
NVM technologies provide efficient, persistent storage for synaptic weights. Three key NVM types with potential for brain implants are:
• Phase-change memory (PCM): Uses chalcogenide materials that switch between amorphous and crystalline states to store data. Its capacity to represent multiple resistance states allows for analog-like computation and persistent synaptic storage (Burr et al., 2016).
• Resistive RAM (RRAM): Also known as memristive memory, it changes resistance based on applied voltage, simulating synaptic plasticity. Its scalability and non-volatility suit high-density synaptic storage in space-constrained brain implants (Li et al., 2023; Wan et al., 2022).
• Ferroelectric RAM (FeRAM): Employs ferroelectric materials to store data without power, featuring fast read/write speeds and high endurance. However, integrating FeRAM into silicon remains a challenge as their chemical properties vary, potentially causing unwanted reactions (Mehonic et al., 2024).
These NVM technologies retain learned synaptic patterns without continuous power, reducing energy consumption and improving reliability for devices such as neural decoders in prosthetics or memory-enhancement implants. However, their application in brain implants requires further research on biocompatibility, long-term stability in biological environments, and integration with neural tissue (Rathi et al., 2022).
5.3 Emerging and theoretical models
5.3.1 Multimodal learning algorithms
Multimodal learning algorithms implemented in neuromorphic chips show great promise for brain-computer interfaces and intelligent robotics. They enable the simultaneous processing of various sensory inputs (e.g., visual, auditory, and tactile), modeling the human brain’s ability to integrate multiple sensory modalities (Krauhausen et al., 2024; Li et al., 2024a,b). Recent advancements include the development of artificial synapses capable of handling multiple stimuli, allowing parallel in-memory computing and low-energy AI processing (Li et al., 2024a,b). A bio-inspired approach using organic neuromorphic circuits has demonstrated real-time multimodal learning in robotic systems, enabling intelligent environmental interaction and suggesting potential applications in sensory substitution for brain implants (Krauhausen et al., 2024). Despite the difficulty of integrating multiple data streams within implant constraints, multimodal neuromorphic systems have the potential to improve data comprehension, performance, and adaptability for sensory substitution or augmentation.
5.3.2 Liquid state machines and echo state networks
Liquid state machines (LSMs) and echo state networks (ESNs) both belong to the reservoir computing paradigm and can be adapted for spiking (neuromorphic) hardware. They enable energy-efficient, real-time processing of spatio-temporal data, useful for applications like seizure prediction or prosthetic control. Studies on SpiNNaker and Loihi-2 show high accuracy in visual classification tasks with low power usage (Patiño-Saucedo et al., 2022; Pawlak et al., 2024); for instance, one LSM reached 91.3% on CIFAR-10 at 213 μJ/frame (Pawlak et al., 2024). ESNs have also been implemented in memristor crossbar arrays, leveraging neuromorphic parallelism and efficiency (Hassan et al., 2017). Recent advances include modular ESNs for EEG-based emotion recognition, achieving improved performance without additional neural adaptation, suggesting potential for brain implants requiring real-time signal interpretation (Yang et al., 2024).
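For intuition about the reservoir principle shared by LSMs and ESNs, the sketch below implements a minimal (non-spiking) echo state network: a fixed random reservoir scaled to a spectral radius below one, with only a least-squares readout trained on a toy next-step prediction task. Sizes, scaling factors, and the task itself are illustrative assumptions, not drawn from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # fixed input weights
W = rng.normal(0, 1, (n_res, n_res))             # fixed recurrent reservoir
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.linspace(0, 8 * np.pi, 800)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out = np.linalg.lstsq(X, y, rcond=None)[0]     # train only the readout
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```

Because only the linear readout is trained, adaptation is cheap, which is part of the appeal for low-power, on-implant signal interpretation.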
5.3.3 Liquid neural networks
Liquid Neural Networks (LNNs), inspired by biological systems, incorporate differential equations into their activation functions to better describe neuronal membrane dynamics. Their adaptive design could make implants more versatile for changing brain needs, enabling continuous, label-free learning. LNNs have shown promise in robotics, autonomous vehicles, and healthcare, where Closed-Form Continuous-Time LNNs (CfCs) enable real-time analytics of complex patient data for earlier diagnoses (Nye, 2023).
6 Neurocomputational models of neuronal communication
Neurocomputational models describe computational principles and structures that model the human brain’s neural architecture. Although computational neuroscience traditionally emphasizes biologically plausible models and detailed physiology, it also inspires (and is inspired by) broader fields like connectionism, control theory, and machine learning (Davison and Appukuttan, 2022). For instance, convolutional neural networks (CNNs)—inspired by the visual cortex—have been adapted into spiking CNNs (SCNNs) for sensory prosthetics, including retinal and cochlear implants (Lindsay, 2020; Büchel et al., 2022a,b; Alsakkal and Wijekoon, 2025).
Neuromorphic computing shares the core goal of replicating the brain’s efficiency and adaptability. While medical applications (e.g., brain implants) could particularly benefit from this synergy due to requirements for low-power, real-time operation, neuromorphic systems also have broader uses in general AI and robotics (Donati and Valle, 2024; Schuman et al., 2022). There are various ways to categorize neurocomputational models—such as single-neuron modeling, neuron–glia interactions, and sensory processing (Linne, 2024; Jiang et al., 2024; Herz et al., 2006). However, in this section, we follow the five-level framework proposed by Herz et al. (2006), as it clearly illustrates how models range from high-fidelity (detailed compartmental) to purely functional (black box), showing the trade-offs in complexity, efficiency, and applicability for potential use on neuromorphic chips for single-neuron dynamics.
6.1 Models
6.1.1 Detailed compartmental models (level 1)
Detailed compartmental models subdivide a neuron into many sections (compartments) to represent how its spatial structure affects electrical and chemical activities (Koch, 1998). They typically rely on anatomical reconstructions, ensuring that features like dendritic branches, axons, and ion channel distributions are included in a realistic way. These approaches build upon Rall’s cable theory, which mathematically showed that voltage attenuation in dendrites spreads asymmetrically (Herz et al., 2006; Rall, 1977; Mainen and Sejnowski, 1996). By using numerical integration across many compartments, these models can reflect complex biophysical details, including active dendritic currents (e.g., calcium spikes) and backpropagation of action potentials (De Schutter and Bower, 1994; Roth and Bahl, 2009). However, modeling large dendritic trees may require over 1,000 compartments, leading to a very high-dimensional system of equations (Amsalem et al., 2020).
6.1.1.1 Examples
• Multi-compartmental neuron models
Used in tools like NEURON, emphasizing accurate geometry and ion channel placement (Friedrich et al., 2013).
• Cable Theory–Based Models
Extend Rall’s equations to cover voltage and current flow along dendrites and axons.
• Thalamocortical neuron models
Enhanced over time with additional ion channels (e.g., dendritic calcium currents) to study fast oscillations or pathological rhythms (e.g., in sleep disorders) (Wang et al., 2022; Destexhe et al., 1998).
6.1.1.2 Key features
• Morphological details
Incorporate anatomical reconstructions to see how shape and structure influence neuronal activity.
• High fidelity
Reflect ion channel variability, dendritic integration, and axonal propagation with considerable detail.
• Numerical complexity
Require solving large sets of differential equations, especially when dendritic trees are extensive (Ben-Shalom et al., 2021).
• Mechanistic insights
Can produce testable ideas about how certain firing patterns or oscillations arise (e.g., Purkinje cell simulations suggesting an inhibitory current behind specific spike patterns) (Lumer, 1997; Santoro et al., 2024).
6.1.1.3 Potential applications for brain implants
1. Precision in stimulation
By modeling how voltage spreads across dendrites and soma, these models could help predict where an implant’s electrical pulses might have the strongest effect (e.g., in Parkinson’s DBS). However, running such detailed computations in real time on an implant could be impractical due to the heavy processing load (Bingham et al., 2018; McIntyre and Foutz, 2013).
2. Understanding neural disorders
Disease-linked alterations (e.g., modified ion channel conductance in epileptic tissue) could be studied in a spatially precise manner, supporting customized stimulation strategies (Suffczynski et al., 2004). However, detailed pathological modeling still demands high-end computing resources, making on-chip simulations unlikely.
3. Predicting extracellular stimulation effects
Because these models show how electrical fields interact with the neuron’s shape, they could suggest how implants should deliver pulses for maximum benefit in treatments like DBS. The challenge might arise as calculating the effects across many neurons or an entire region can rapidly exceed computational limits (Hussain et al., 2024; Yousif and Liu, 2007).
6.1.1.4 Constraints
These models often require high-performance computing or offline simulations, making them unsuitable for real-time neuromorphic hardware or large-scale networks (Amsalem et al., 2020). Moreover, including full dendritic and axonal detail for every neuron in a network is generally infeasible. Although detailed compartmental models do guide design and optimization, their full simulations remain impractical for on-chip use. As a result, clinicians and engineers typically employ them offline to optimize factors such as electrode placement or stimulation patterns, then transfer simplified models or empirically derived parameters to the actual device.
6.1.2 Reduced compartmental models (level 2)
Reduced compartmental models simplify the spatial detail of neurons while retaining key biophysical elements (e.g., voltage-dependent currents, somatodendritic interactions). They represent a compromise between the high detail of Level I (detailed compartmental) models and the computational simplicity of Level III (single-compartment) models. By keeping a limited number of compartments—often two or three—they provide more biological detail than single-compartment approaches, yet they remain easier to analyze than fully detailed simulations (Herz et al., 2006; Izhikevich, 2006).
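A hedged, toy version of the two-compartment idea is sketched below: a passive dendrite receives input and exchanges current with a soma through a coupling conductance, while spiking is reduced to a threshold-and-reset rule at the soma. The parameters are arbitrary illustrative values, not a published set such as Pinsky–Rinzel’s.

```python
import numpy as np

# Toy two-compartment neuron: a passive dendrite drives a threshold-and-reset soma
# through a coupling conductance g_c. All parameters are illustrative.
C, g_L, E_L = 1.0, 0.1, -65.0
g_c = 0.2                   # soma-dendrite coupling conductance
V_th, V_reset = -50.0, -65.0
dt = 0.1                    # ms

V_s, V_d = E_L, E_L
spike_times = []
for step in range(10000):   # 1 s of simulated time
    I_d = 5.0               # constant dendritic drive (stands in for synaptic input)
    dV_d = (-g_L * (V_d - E_L) + g_c * (V_s - V_d) + I_d) / C
    dV_s = (-g_L * (V_s - E_L) + g_c * (V_d - V_s)) / C
    V_d += dt * dV_d
    V_s += dt * dV_s
    if V_s >= V_th:         # spike generation reduced to threshold-and-reset
        spike_times.append(step * dt)
        V_s = V_reset

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```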
6.1.2.1 Examples
• Leaky integrate-and-fire (LIF) neuron model
Focuses on membrane leakage and spike generation with fewer parameters than more detailed models such as Hodgkin–Huxley (Hodgkin and Huxley, 1952).
• Izhikevich neuron model
Uses a minimal set of equations to reproduce various firing patterns, making it more efficient than fully biophysical approaches.
• Two-compartment models (soma + dendrite)
Divide a neuron into soma and dendrite (or further sections) to study phenomena such as homeostatic plasticity or binaural time difference detection in bipolar cells (Bush and Sejnowski, 1993).
• Simplified dendritic tree models
Keep partial branch structures to model local dendritic processes without modeling every branch.
6.1.2.2 Key features of this group
• Somatodendritic interactions
Allow partial modeling of how dendrites and soma exchange signals, influencing bursts, spikes, or oscillations (Tomko et al., 2021; Bush and Sejnowski, 1993).
• Calcium dynamics (when included)
Enable phenomena like stable firing rate switching or short-term memory without requiring an elaborate multi-compartment tree (Marcucci et al., 2018).
• Scalability and mathematical clarity
Compared to Level I models, these designs often scale better for network studies (e.g., cortical gamma or slow-wave oscillations), and are simpler to analyze for emergent behaviors (Close et al., 2014).
• Task-specific computations
These models can capture how neurons perform behaviorally relevant computations at multiple timescales, linking neural structure to function.
6.1.2.3 Potential applications for brain implants
1. Sensory prostheses
Reduced complexity may support near real-time simulations of hearing or vision pathways, helping design implants that reflect some somatodendritic interactions. Yet they still demand more resources than single-compartment models, which can limit on-chip processing for very large sensory arrays.
2. Adaptive interfaces
By including calcium currents or partial dendritic structures, these models could adjust to patient-specific firing patterns or changes in neural state. However, ongoing parameter tuning may require external computation, given implant hardware constraints.
3. Local circuit simulations
Modeling small or medium-sized networks could help predict how groups of neurons respond to stimulation, guiding more targeted interventions (e.g., in motor or cognitive prosthetics). While more feasible than Level I, simulating an entire cortical region in real time may still be beyond typical implant hardware capabilities.
6.1.2.4 Constraints
Though they need fewer resources than fully detailed compartmental models, these approaches still use more computational power than single-compartment designs (Bush and Sejnowski, 1993). Large-scale, real-time simulations in implant devices may prove difficult under such demands. For mid-scale tasks, however, they could deliver sufficient neural detail for certain network studies or adaptive interfaces, without causing excessive computational load.
6.1.3 Single compartmental models (level 3)
Single compartmental models represent each neuron as a point-like unit and do not include the spatial details of dendrites or axons. They focus on how ionic currents govern subthreshold behavior and spike generation in a consolidated way, making them computationally efficient for large-scale or real-time simulations. Despite omitting dendritic or axonal structure, they often provide a useful quantitative look at how key variables—such as membrane voltage and ion channel states—interact to produce neural firing patterns (Koch, 1998).
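As a minimal example of this class, the following sketch simulates a leaky integrate-and-fire neuron, tau dV/dt = −(V − E_L) + R·I, with threshold-and-reset spiking; the parameters are generic textbook-style values chosen only to show the rheobase behavior.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the whole cell is a single point.
# Generic textbook-style parameters, chosen only to show rheobase behavior.
tau, E_L, R = 10.0, -65.0, 10.0        # ms, mV, MOhm
V_th, V_reset = -50.0, -65.0           # mV
dt = 0.1                                # ms

def simulate_lif(I, t_max=500.0):
    """Return spike times (ms) for a constant input current I (nA)."""
    V, spikes = E_L, []
    for step in range(int(t_max / dt)):
        V += dt * (-(V - E_L) + R * I) / tau   # membrane equation
        if V >= V_th:                           # threshold crossing -> spike
            spikes.append(step * dt)
            V = V_reset                         # reset after the spike
    return spikes

for I in (1.0, 1.6, 2.5):
    rate = len(simulate_lif(I)) / 0.5           # spikes per second over 500 ms
    print(f"I = {I} nA -> {rate:.0f} Hz")
```

Below the rheobase current the neuron stays silent; above it, the firing rate grows with input, which is the essential input–output behavior these point models trade morphology for.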
6.1.3.1 Examples
• Hodgkin–Huxley model
Considered the prototype for Level III. It concentrates on multiple ion channels in a single compartment, explaining subthreshold dynamics and spike initiation without spatial subdivisions (Hodgkin and Huxley, 1952).
• Simple integrate-and-fire (IF) model
Focuses on input integration and threshold-based spiking, with minimal parameters.
• Leaky integrate-and-fire (LIF) model
Adds a leak term to better reflect real neural membranes.
• Theta neuron model
Uses phase variables to track spiking behavior under minimal assumptions.
• FitzHugh–Nagumo model
Includes simplified equations that approximate the action potential mechanism, often for conceptual or educational use.
• Izhikevich model (single-compartment form)
Although often considered in the “reduced compartmental” category, it can also be implemented in a single-compartment form for certain use cases, giving a more realistic representation of passive membrane properties (Izhikevich, 2003).
6.1.3.2 Key features
• Removing spatial structure
All dendrites and axons are lumped into one computational node, focusing on how combined ionic currents drive spikes. This distinguishes these models from Level I and II, which include some morphological detail.
• Quantitative understanding of dynamics
They clarify how membrane voltage, ion channels, and thresholds govern phasic spiking, bursting, or spike-frequency adaptation—often through phase-plane or bifurcation analysis (Rinzel and Ermentrout, 1989).
• Mathematical reductions
Systematic methods reduce or approximate more detailed models (like Hodgkin–Huxley) to an integrate-and-fire or resonate-and-fire form, enabling analytical insight (Izhikevich, 2006).
• Stochastic dynamics and noise
Ion channel noise or background synaptic inputs can be included, explaining variations in spike timing and how random fluctuations might affect signal reliability.
6.1.3.3 Potential applications for brain implants
6.1.3.3.1 Basic neural communication
Because these models avoid spatial complexity, they could process large numbers of neurons with minimal computational cost, fitting power-limited implant constraints (Dehghanzadeh et al., 2021). However, the absence of dendritic or axonal structure means these models might not reflect certain detailed processes relevant to specific therapies.
6.1.3.3.2 Motor control implants
A lightweight design could be useful for decoding or controlling muscle activation patterns, potentially helping with real-time prosthetic limb control. Nevertheless, adaptation or conduction delays that depend on neuron geometry are not represented, so precision in controlling multi-joint movements may be affected.
6.1.3.3.3 Large-scale population simulations
Integrating thousands of these neurons for cortical assemblies or multi-region models is more feasible than with Levels I or II, which could be useful for broad network simulations within implant hardware. However, missing spatial interactions can reduce fidelity when studying phenomena that hinge on local dendritic integration or traveling waves.
6.1.3.3.4 Rapid network responses
Low overhead could support fast feedback loops for closed-loop seizure detection or adaptive deep brain stimulation. However, oversimplified stochastic elements and nonlinearities may affect accuracy in complex pathological conditions.
6.1.3.4 Limitations
Though single-compartmental models are widely studied, researchers occasionally discover unexpected behaviors. For instance, the standard Hodgkin–Huxley formulation might not fully explain every aspect of spike generation, and even slight additions or noise terms can reveal new details about spiking reliability or variability (Fang et al., 2021). They also cannot model dendritic computations like synaptic integration along branching processes, a limitation that is the price of their speed and simplicity (Hendrickson et al., 2010; Brette, 2015).
6.1.4 Cascade models (level 4)
Also known as Level IV models in some classifications or neural encoding models for sensory information, cascade models focus on the conceptual side of neural encoding rather than the biophysical mechanisms inside single neurons. They treat sensory processing and other neural computations as a sequence of mathematical operations—often linear filters, nonlinear transformations, and stochastic processes—that transform incoming signals into meaningful output. This stepwise approach is commonly applied to sensory systems (e.g., vision, audition) and is especially valuable for interpreting how neurons handle high-dimensional inputs in a feed-forward manner (Latimer et al., 2019; Meyer et al., 2017).
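A minimal linear–nonlinear–Poisson (LNP) cascade, one of the examples listed below, can be sketched directly from this description: convolve the stimulus with a linear filter, map the result through a static nonlinearity to a nonnegative rate, and draw spikes from a Poisson process. The filter shape, softplus nonlinearity, and rate scale here are arbitrary assumptions for illustration.

```python
import numpy as np

# Linear-Nonlinear-Poisson (LNP) cascade sketch; every parameter is an
# arbitrary illustrative choice, not fit to data.
rng = np.random.default_rng(1)
dt = 0.001                                  # 1 ms time bins

# 1) Linear stage: biphasic temporal filter convolved with the stimulus.
t = np.arange(0, 0.1, dt)
k = np.sin(2 * np.pi * t / 0.1) * np.exp(-t / 0.03)
stimulus = rng.standard_normal(2000)        # white-noise stimulus
drive = np.convolve(stimulus, k, mode="full")[: len(stimulus)]

# 2) Static nonlinearity: softplus maps the filtered drive to a rate (Hz).
rate = 20.0 * np.log1p(np.exp(drive))

# 3) Stochastic stage: Poisson spike counts per bin from the rate.
spikes = rng.poisson(rate * dt)
print(f"mean firing rate: {spikes.sum() / (len(spikes) * dt):.1f} Hz")
```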
6.1.4.1 Examples
• Linear cascade models
Often involve simple convolution or filtering steps to represent basic visual or auditory pathways.
• Nonlinear cascade models
Extend linear versions by adding adaptive or more complex transformations, allowing for phenomena like contrast gain control or adaptive coding.
• Linear–Nonlinear–Poisson (LNP) models
Include a spike generation process (Poisson) after a linear filter and a static nonlinearity, modeling how neural firing might depend on filtered stimuli (Zoltowski and Pillow, 2018; Neri, 2015).
• Generalized linear models (GLMs)
Provide a flexible framework to fit input–output relationships from empirical data, incorporating spike history effects or refractoriness.
• Hierarchical Max-Pooling models
Stack multiple filtering and pooling layers, modeling advanced visual processes (e.g., complex cells in the cortex).
6.1.4.2 Key features of this group
• Conceptual level of computation
Emphasize the functional transformations neurons perform on inputs, rather than morphological details or ion channel distributions (Koch, 1998).
• Mathematical primitives
Rely on operations like convolution (linear filters), rectification (nonlinear functions), and random processes (e.g., Poisson spiking) (Moskovitz et al., 2018). For instance, adding a normalization nonlinearity to cascaded linear filters can capture motion processing in visual pathways, illustrating how these transformations go beyond simple linear filtering (Simoncelli and Heeger, 1998).
• Fitting to experimental data
Model parameters are often obtained through regression or maximum likelihood methods (Pillow et al., 2008).
• Applications beyond sensory pathways
Although widely used in vision or audition, they can also address how neurons adapt to different stimulus statistics or encode multiple features (Betzel et al., 2024).
6.1.4.3 Potential applications for brain implants
1. Sensory restoration
Translating sensory signals through cascaded filtering and nonlinearity may let devices (e.g., cochlear or retinal implants) approximate natural coding (Zrenner, 2002; Wilson and Dorman, 2008; Fornos et al., 2019). However, real neural circuits include feedback and context-dependent processing that simple cascades do not capture.
2. Signal processing chains
Each stage can be optimized independently, which might simplify the design of implant firmware that manages noisy or high-dimensional signals. But strongly sequential structures may not adapt well to dynamic conditions involving recurrent loops or feedback from other brain areas, and a strictly feed-forward chain leaves the parallel nature of neuromorphic hardware underused (Guo et al., 2024; Lebedev and Nicolelis, 2017).
3. Adaptive tuning
Modularity of these models makes it easier to adjust individual layers to reflect patient-specific changes in neural responses. However, if the implant requires fully online adaptation, the computational overhead of re-fitting multiple parameters might be too high for hardware with strict power constraints (Dehghanzadeh et al., 2021; Fisher et al., 2023).
4. Conceptual simplicity
Cascade-based frameworks can run efficiently on neuromorphic chips in feed-forward mode, aligning with the power constraints typical of implantable devices (Dehghanzadeh et al., 2021). Yet they might not handle recurrent or complex feedback-driven behaviors (e.g., certain cognitive tasks) within the same model architecture (Guo et al., 2024).
6.1.4.4 Limitations
Cascade models primarily handle forward-flow transformations, which may be insufficient for neurons deep in sensory pathways or for tasks that involve complex feedback loops. While they are well-suited for discovering or modeling receptive fields and filter stages, they might generalize poorly across diverse stimulus conditions or dynamic contexts. Additional layers (e.g., recurrent or adaptive modules) or advanced model structures may be necessary to handle feedback mechanisms or strong interactions among distant neural populations (Almasi et al., 2022; Zhang et al., 2017).
6.1.5 Black box models (level 5)
Black box models concentrate on system-level input–output relationships, often through probability distributions such as p(Response∣Stimulus). Rather than simulating a neuron’s internal biophysical processes, these frameworks focus on functional accuracy—how well the output matches observed data or desired performance (Saxena et al., 2012; Sheu, 2020). Because they do not require detailed morphological or mechanistic information, they can adapt to various conditions by learning directly from empirical data (Saadatinia and Salimi-Badr, 2024).
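In the same spirit, a deliberately mechanism-free decoder can be written in a few lines: ridge regression mapping recorded features to a behavioral variable, learned purely from data. The synthetic features and target below stand in for real recordings and are assumptions of this sketch.

```python
import numpy as np

# Black-box decoding sketch: map neural features to behavior with no biophysics.
# Synthetic data stand in for recorded firing rates and a 1-D kinematic target.
rng = np.random.default_rng(2)
n_trials, n_channels = 400, 64

X = rng.standard_normal((n_trials, n_channels))        # "firing-rate" features
w_true = rng.standard_normal(n_channels)
y = X @ w_true + 0.5 * rng.standard_normal(n_trials)   # noisy behavioral target

# Ridge regression: w = (X^T X + lambda I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

r = np.corrcoef(y, X @ w)[0, 1]
print(f"decoding correlation (training data): {r:.2f}")
```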
6.1.5.1 Examples
• Input–output models
Characterize stimulus–response mappings from recorded data, with no aim to explain the neuron’s internal workings.
• Neural network models
Aim to maximize performance on tasks like classification or regression, sometimes using large training datasets.
• Support vector machines (SVMs)
Use margin-based optimization to separate classes or predict continuous outputs, staying agnostic about neuronal details.
• Gaussian process models
Provide a probabilistic treatment of inputs and outputs, offering uncertainty estimates and flexible nonparametric fitting.
6.1.5.2 Key features of this group
• Neglect of biophysical mechanisms
They intentionally bypass the ion channels, compartments, or morphological aspects of neurons, focusing solely on mapping from inputs to outputs (Karim et al., 2023).
• Data-driven probability distributions
Often define the relationship p(R∣S) between response (R) and stimulus (S), sometimes relying on nonparametric methods that infer distributions directly from measurements (Huang, 2024).
• Adaptability & neural efficiency
Useful for examining how operating points shift when input statistics change, or how the system evolves to maintain robust performance (Karim et al., 2023).
• Functional outcomes
Emphasize results like error rates, decoding accuracy, or predictive power over explanations of how these results emerge biologically. However, this emphasis has been shifting in recent years toward more explainable models.
6.1.5.3 Applications in brain implants
1. Functional goals
When the primary objective is to achieve a specific outcome—for example, schizophrenia diagnosis—black box methods can deliver good performance without detailing internal neuronal mechanisms (Saadatinia and Salimi-Badr, 2024; Highton et al., 2024). However, since these models do not rely on biological details, they may struggle to provide precise or fine-tuned stimulation patterns that align with actual neural processes.
2. Rapid design and high-level control
Quick to train or configure, making them a practical choice for developing algorithms for prosthetics or brain–machine interfaces with minimal assumptions about the neuron-by-neuron architecture (Lebedev and Nicolelis, 2017). However, such models may require frequent retraining when faced with varied stimuli or changing patient conditions, which can be demanding on implant hardware; this makes them better suited to off-chip software than to on-chip computing.
6.1.5.4 Limitations
Neuromorphic hardware often restricts how plasticity is implemented, limiting real-time learning for black box models. Real-time performance in a changing physiological environment demands adaptive algorithms, which can be difficult to support on-chip (Mayr et al., 2016). In practice, it could be more feasible to train networks offline, then deploy fixed weights on custom neuromorphic chips, achieving efficient inference but reducing in-situ adaptability (Merk et al., 2023).
6.1.6 Unifying models: fundamental code unit and brain code
The Fundamental Code Unit (FCU) and Brain Code (BC) frameworks provide a method for connecting low-level biological processes (e.g., ion channels, protein interactions) with high-level cognitive outputs (e.g., language, decision-making). They link detailed and abstract perspectives, describing how neural signals move from molecular chirality in neurotransmitters to fully formed mental activities in human cognition (Howard and Hussain, 2018; Howard et al., 2020; Isik et al., 2024).
6.1.6.1 Key features
1. Higher-Order Abstractions
a) FCU is presented as an abstract code unit that relates basic biophysical events (like spikes or protein-driven signaling) to more advanced mental functions. This approach covers multiple scales, from neurochemical to behavioral.
2. Four Principles of Brain Code
a) Activation Thresholds: Includes phenomena such as action potential thresholds, Weber’s just-noticeable differences, and baseline neural firing cutoffs.
b) Duration of the Signal: Considers how long a signal persists, influencing short-term loops and long-term patterns.
c) Waveforms for Information Transfer: Addresses oscillatory or spike-based signals, including the influence of neurotransmitter chirality.
d) Transduction Between Different Forms of Energy: Focuses on transitions such as ATP/ADP cycles and how chemical gradients become electrical impulses.
3. Brownian Motion and Protein Dynamics
a) Recognizes the stochastic nature of neurotransmission—e.g., in the ubiquitin–proteasome or autophagy–lysosome pathways—and how it affects signal consistency.
4. Cumulative Cognitive Output
a) Treats language and behavior as products of ongoing neural signals, showing how short-term electrical events connect to higher cognitive or emotional states (e.g., isomer-specific mood shifts).
5. Stochastic Neural Signals
a) Notes that spikes, channel gating, and synaptic release have random components, and that the ON/OFF unary math in FCU can incorporate these variations.
6. Relation to Memory and Learning
a) Connects with processes such as LTP (long-term potentiation), LTD (long-term depression), and ATP-related energy usage to account for extended neural plasticity in cognition.
7. Unary Mathematics
a) Uses a ± coding approach to simplify neural signaling into an ON/OFF framework, designed for efficient implementation in computational hardware, as illustrated in the sketch below.
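The cited work does not specify a hardware-level implementation of this unary scheme, so the fragment below is only a speculative illustration of what a ± reduction might look like: a noisy trace is collapsed to +1/−1 events around a moving baseline, with a dead band against noise.

```python
import numpy as np

# Speculative illustration only (not from the cited work): reduce a noisy trace
# to ON (+1) / OFF (-1) events around a moving baseline, with a dead band.
rng = np.random.default_rng(3)
signal = np.cumsum(rng.standard_normal(1000)) * 0.1   # toy neural trace

window = 50
baseline = np.convolve(signal, np.ones(window) / window, mode="same")
deadband = 0.5                                         # suppresses small fluctuations

unary = np.zeros(signal.shape, dtype=int)
unary[signal > baseline + deadband] = +1
unary[signal < baseline - deadband] = -1
print(f"fraction of bins emitting ON/OFF events: {np.mean(unary != 0):.2f}")
```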
6.1.6.2 Applications for brain implant chips
1. Better signal decoding
By converting noisy or random neural activity into a unary code, implants may decode signals with improved consistency while staying within practical hardware limits.
2. Improved treatment of neurological disorders
Tuning stimulation parameters based on details like molecular chirality or activation thresholds could help devices manage conditions (e.g., Parkinson’s), adjusting activity patterns as the brain state evolves (Howard and Hussain, 2018).
3. Predictive modeling
Incorporating Brownian motion and stochastic synaptic events might allow early detection of cognitive or pathological shifts, enabling preemptive intervention.
4. Integrated data processing
Combining sensor inputs, linguistic data, and behavioral metrics within a unary-coded structure offers a more unified view of a patient’s neural and cognitive status (Howard and Hussain, 2018).
6.1.6.3 Limitations
One limitation is that model parameters often rely on in vitro data, raising questions about how well they generalize to in vivo conditions. Moreover, the models need robust methods for extracting relevant neural features and translating them into control mechanisms.
6.1.7 Combining models
While each modeling level has its own strengths—from the spatial detail of Detailed Compartmental Models to the high-level functionality of Black Box approaches—their components could possibly be unified on a single neuromorphic chip to address a wide range of brain implant needs within the chip’s power restrictions (Dehghanzadeh et al., 2021; Qi et al., 2023). For real-time loops (e.g., motor control or seizure detection), Single-Compartment Models or Reduced Compartmental Models may handle essential dynamics under strict power and latency constraints, while Cascade Models can structure sensory signal flow in specialized stages (Ramezanian-Panahi et al., 2022). Meanwhile, Fundamental Code Unit (FCU) and Brain Code (BC) frameworks can integrate higher-level or axiological factors for adaptive therapies (e.g., shifting stimulation patterns in response to cognitive changes) (Howard and Hussain, 2018). Where long-term plasticity is required, custom or STDP-based rules could let the system adjust stimulation over time as a patient’s neural state evolves. A combination of parameters from various models is necessary for building effective on-chip computing for brain implants (Bazzari and Parri, 2019).
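For the STDP-based rules mentioned above, the standard pair-based update offers a concrete reference point: a weight potentiates when a presynaptic spike precedes a postsynaptic one and depresses in the reverse order, with exponential dependence on the interval. The amplitudes and time constants below are conventional illustrative values, not tuned for any implant.

```python
import numpy as np

# Pair-based STDP: the weight change depends exponentially on the pre-post
# spike interval. Amplitudes and time constants are conventional toy values.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0    # ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre before post -> LTP
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)      # post before pre -> LTD

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 71.0)]:
    w += stdp_dw(t_pre, t_post)
    print(f"pair ({t_pre}, {t_post}) ms -> w = {w:.4f}")
```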
One key challenge for brain implants is scaling from micro-level neuronal activity to macro-level outcomes—such as conscious arousal or sleep states—an aspect already explored by FCU and BC. Recent studies show how single-neuron biophysics and large-scale arousal states connect, demonstrating how microscopic spiking or bursting patterns can influence macroscale phenomena like wakefulness or anesthesia (Munn et al., 2023; McGinley et al., 2015). This highlights the need to design computational systems that merge compartmental or spiking neuron models with higher-level frameworks in neuromorphic designs, particularly for brain implants intended for adaptive real-time control across different brain states. In this way, researchers can move beyond local neuronal or therapeutic effects to consider how large-scale behaviors might be altered—and the possible risks involved—as research in this area continues (Munn et al., 2023; Singh et al., 2025; Zotey et al., 2023). An alternative approach could be modeling the state of a neural disease and its gradual progression toward a healthy model, allowing for controlled healing of neural tissue.
Malfunction of these higher-level models (the formal information-processing rules) or of the underlying mechanisms (the neural signal transformations themselves) can similarly impair executive cognition, as a recent study shows (Barack and Platt, 2016). When models fail to execute correctly, or when neural circuits (e.g., in medial prefrontal cortex) malfunction and misimplement those computations, the result may be dysfunctional behavior or impaired cognition. For brain implants seeking to restore or enhance higher-order functions, it is important to consider not just how a desired model is defined, but also how the actual neuronal mechanisms implement it in real time.
6.2 Challenges of neurocomputational models on neuromorphic hardware
6.2.1 Complexity of neural dynamics simulation
Many neurocomputational models—particularly those simulating detailed neural activity like ion channels or synaptic integration—require significant computational resources (Hodgkin and Huxley, 1952; Koch, 1998). Current neuromorphic hardware might lack the complexity or precision to manage these dynamics at scale, forcing simplifications that reduce fidelity (Schuman et al., 2017; Ward and Rhodes, 2022). This is especially problematic for brain implants, where accurate neural modeling is crucial for addressing pathologies. Different neuron types are determined by gene expression, their specific ion-channel complement, and the polarity of the intra- and extracellular environment connecting all parts of the cell. Ion channels can exhibit diverse timescales, voltage ranges, or inactivation states, shaping how neurons respond to inputs and external modulation (Koch, 1998).
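The diversity of channel timescales can be made concrete with a single Hodgkin–Huxley gating variable, dm/dt = alpha_m(V)(1 − m) − beta_m(V)m, whose steady state and time constant both depend on voltage. The rate functions below are the classic squid-axon forms (shifted so rest sits near −65 mV), shown only to illustrate the per-channel bookkeeping a faithful simulation must carry.

```python
import numpy as np

# One Hodgkin-Huxley gating variable (sodium activation m, squid-axon rate
# functions shifted so rest sits near -65 mV):
#   dm/dt = alpha_m(V) * (1 - m) - beta_m(V) * m
def alpha_m(V):
    return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))

def beta_m(V):
    return 4.0 * np.exp(-(V + 65.0) / 18.0)

# Both the steady state m_inf and the time constant tau_m depend on voltage.
for V in (-65.0, -50.0, -30.0, -10.0):
    a, b = alpha_m(V), beta_m(V)
    m_inf, tau_m = a / (a + b), 1.0 / (a + b)
    print(f"V = {V:6.1f} mV: m_inf = {m_inf:.3f}, tau_m = {tau_m:.3f} ms")
```

A full Hodgkin–Huxley compartment tracks several such variables at once, and each additional channel type adds more, which is precisely the resource burden discussed here.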
Designing neuromorphic interfaces that generate adaptable and safe electrical patterns is also challenging. While the brain can adapt to varied signals, correctly reproducing individual cell dynamics and modeling the bioelectrical environment might help lower the risk of immune responses (Lebedev and Nicolelis, 2006; Polikov et al., 2005; Fani et al., 2023). These implants must deliver stimuli with precisely defined amplitude and timing, adjusting to ongoing brain activity. Achieving this depends on understanding how different neural pathways contribute to sensory processing, cognition, and action, then using real-time algorithms to interpret signals and convert them into suitable stimulation protocols (Contreras et al., 2024; Chiappalone et al., 2022).
6.2.2 Large-scale network architectures
The complexity of brain modeling remains high, and our current grasp of neural systems continues to evolve—especially regarding regeneration, adaptation, and large-scale network modeling. Simplified models risk omitting key aspects of neural dynamics, reducing an implant’s adaptability over time as neurobiological knowledge advances (Pathak et al., 2022). Meanwhile, current neuromorphic chips face practical limits on the number of neurons and synapses they can handle due to chip size, power usage, and interconnectivity challenges (Dehghanzadeh et al., 2021; Contreras et al., 2024). Modeling entire cortical areas may require scaling down model complexity or distributing workloads across multiple chips, which introduces latency and synchronization hurdles that can constrain real-time processing in implants (Stanslaski et al., 2012).
6.2.3 Scalability and power efficiency
Perhaps the most critical area of focus is developing neuromorphic chips that remain power-efficient while supporting high-complexity neural computations. This challenge is especially pressing in brain implants that demand battery-free solutions given limited battery life and the shortcomings of wireless power transfer in extending operational longevity (Dehghanzadeh et al., 2021; Miziev et al., 2024). This issue is heightened by the fact that biological brains exhibit great efficiency that neuromorphic systems frequently fail to match at scale, leading to a trade-off between model complexity and battery life in implantable devices if on-device computing must run long-term without frequent recharging. This also raises concerns about long-term stability to maintain steady chip performance and avoid degradation, particularly for patients living with the implant for many years (Dehghanzadeh et al., 2021; Miziev et al., 2024).
6.2.4 Translation of models to hardware and new hardware design
A gap persists between theoretical neurocomputational models and practical neuromorphic chip implementations. Each model’s requirements for precision, timing, and connectivity may not align with architectures typically optimized for generalized SNN operations. Researchers are now exploring designs that more directly mirror biological neural systems—potentially using additional analog components for finer synaptic control, memory technologies that reflect biological processes (Zhang et al., 2020), or even xenobots (Kriegman et al., 2020), anthrobots (tiny robots made from human cells that heal damaged tissue; Hutson, 2023), or other bio-hybrid solutions (Boulingre et al., 2023; Rochford et al., 2019) as technology advances. Some are also investigating mind-simulation/uploading, where a patient’s brain state could be modeled online, possibly tracking regeneration progression and adapting implants accordingly, such as in epilepsy (Jirsa et al., 2016; Watanabe, 2023). In many cases, custom-made chip designs—possibly unique to each patient (Singh et al., 2025)—might prove necessary, making off-the-shelf solutions insufficient and pushing up development costs for brain implants.
6.2.5 Need for a new complete neuromorphic pipeline
Developing a complete sensor–processor–stimulator pipeline is key for neuromorphic brain implants, as shown in Figure 2. Each component must be built specifically for neural use with possibly biocompatible materials and signal processing (Valle et al., 2024): the sensor interprets neural signals, the processor decodes and processes them in real time, and the stimulator generates precise patterns to modulate neural circuits based on the processed information (Stanslaski et al., 2012; Chiappalone et al., 2022). Neuromorphic bio-signal interfaces enhance this pipeline by enabling efficient EEG and ECG processing for real-time applications, such as decoding brain states or monitoring cardiac activity, though integrating these capabilities into a compact, low-power system is still a significant challenge (Bauer et al., 2019; Sharifshazileh et al., 2021; Li et al., 2024a,b).

Figure 2. An example of a neuromorphic pipeline for brain implants, starting with neural recording and progressing through event-driven processing, spiking neural network computation, and adaptive neuromodulation for closed-loop control.
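In code-like form, the loop in Figure 2 reduces to a simple skeleton. The detection rule and stimulation policy below are placeholders rather than clinical logic, and a real device would replace each function with its sensor front-end, on-chip decoder, and stimulator driver.

```python
import numpy as np

# Schematic closed-loop skeleton (record -> detect -> stimulate). The detector
# and stimulation policy are placeholders, not clinical logic.
rng = np.random.default_rng(4)

def read_sensor(t):
    """Placeholder recording front-end: one 32-channel sample window."""
    burst = 3.0 if 300 <= t < 320 else 0.0     # injected toy 'event'
    return rng.standard_normal(32) + burst

def detect(window, threshold=2.0):
    """Placeholder event detector: mean channel power against a threshold."""
    return np.mean(window ** 2) > threshold

def stimulate(t):
    """Placeholder stimulator call; a real driver would set pulse parameters."""
    print(f"t = {t} ms: stimulation pattern delivered")

for t in range(0, 600, 20):                    # 20 ms control ticks
    if detect(read_sensor(t)):
        stimulate(t)
```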
7 Conclusion and next-generation brain implants
Addressing these issues requires the development of high-performance, low-power neural signal processing algorithms with adequate compression capabilities, though this often involves navigating trade-offs between chip size and functionality.
7.1 Memory storage
Memory storage in neuromorphic hardware may someday replicate the brain’s own mechanisms for encoding, storing, and retrieving information, possibly through advanced materials, such as neuromorphic optical data storage enabled by nanophotonics (Lamon et al., 2024; Tait et al., 2019), FLASH memory, other NVM technologies (Mehonic et al., 2024), or adaptable wiring.
Creating truly effective neuromorphic systems requires a “brain code” that captures key biochemical and electrical dynamics while adapting to neural activity and plasticity (Valle et al., 2024; Chiappalone et al., 2022). Research on cellular intelligence suggests this code must surpass traditional neural models, incorporating bioelectrical signals that act as “software” guiding cellular behavior and even large-scale anatomical outcomes such as organ formation (Levin, 2021). Such competencies and collective goals create a multi-scale architecture that implants must interpret and modulate. Yet even advanced neuromorphic technologies only partially capture these emergent properties—self-organization and goal-directed behavior. Success will hinge on fidelity and adaptability: model choices must respect synaptic plasticity, network connectivity, and biochemical responsiveness without disrupting the brain’s natural processes (Contreras et al., 2024; Rommelfanger et al., 2021).
7.2 Single-neuron spatial resolution and adverse effects
Implementing neuron-to-neuron modeling rather than stimulating a broader area may be important for achieving better therapeutic outcomes (Zhang et al., 2023; Topalovic et al., 2023; Campbell et al., 2024). ETH Zurich researchers have found that neuroprosthetics work better when they use signals inspired by nature: in leg amputees, such biomimetic stimulation proved superior to time-invariant stimulation while placing less demand on the brain (Valle et al., 2024).
Although it may seem conceivable that the brain could adapt to any kind of electric signal or pattern, cells react to mechanical properties in their surroundings—including matrix stiffness and external forces—and targeting entire clusters instead of a single dysfunctional cell can trigger unwanted reactions (Kumosa, 2023; Su et al., 2024; Davidson et al., 2021; Ladoux and Mège, 2017; Alberts et al., 2002). Broad stimulation risks overlooking the collective intelligence of cellular networks, where cells communicate via bioelectrical and molecular signals to achieve specific goals (Levin, 2021). Disrupting this collective behavior could amplify adverse effects by ignoring the emergent dynamics that govern tissue integrity and function. Interconnected cells influence each other’s behavior, including through extracellular and systemic signaling. As a result, stimulating an entire group can amplify the release of harmful molecules from dysfunctional cells—compromising tissue integrity—or trigger systemic immune responses, potentially leading to inflammation or hypersensitivity disorders. It may also cause therapeutic effects to lose selectivity or become counterproductive if normal cells are affected, but more research is required to further confirm this (Su et al., 2024; Hegade and Rashid, 2024; Kubelt et al., 2021; Alberts et al., 2002). Moreover, the cerebral vascular network should be taken into account—not only during the implant’s design and placement but also as a potential source of noise or fluctuation in signal recording (Kozai et al., 2014).
1. Neighboring cell effects
Neighboring cells often evaluate each other’s fitness through mechanisms like “fitness fingerprints,” where less fit cells are targeted for elimination by their healthier neighbors. Stimulating all cells in a group could disrupt this balance and potentially enhance the survival of pathological cells or suppress the natural elimination of damaged ones (Madan et al., 2019; Colom et al., 2020; Rao et al., 2012).
2. Release of toxic substances
When dysfunctional cells are stimulated, they may release harmful substances such as reactive oxygen species or inflammatory signals, which can damage surrounding normal cells. For example, dying neurons release neurotoxic factors that harm nearby neurons, and preventing this through targeted interventions can potentially protect the group (Rao et al., 2012; Lee et al., 2010; Block and Hong, 2005).
3. Immune system activation
Broad stimulation may inadvertently activate immune responses, such as T-cell hypersensitivity reactions. These reactions can lead to systemic effects like inflammation or tissue damage, as seen in drug-induced hypersensitivity reactions mediated by off-target immune receptor interactions (Pichler et al., 2015; Adam et al., 2010; Wuillemin et al., 2022).
4. Off-target effects
Stimulating a group of cells could result in unintended activation of nearby normal cells, leading to off-target effects. For instance, with the simultaneous intake of drugs, interactions with non-target receptors or proteins can cause unpredictable side effects, including immune-mediated adverse reactions (Adam et al., 2010; Wuillemin et al., 2022).
Future advancements must decode cellular communication—understanding collective goals and incorporating multi-scale, non-invasive approaches to achieve full integration with the brain’s complex intelligence and architecture. This shift, potentially through biohybrid solutions or enhanced neuromorphic models, requires further studies to bridge the gap between current technology and the brain’s capabilities. More research is needed on the effects of targeted neuromorphic stimulation and the potential adverse effects of large-surface stimulation, particularly in light of cellular communication and collective biological intelligence, for next-generation implants.
Author contributions
WP: Writing – original draft, Writing – review & editing. NH: Funding acquisition, Supervision, Writing – review & editing.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
Authors WP and NH were employed by company ni2o.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Aboumerhi, K., Güemes, A., Liu, H., Tenore, F., and Etienne-Cummings, R. (2023). Neuromorphic applications in medicine. J. Neural Eng. 20:041004. doi: 10.1088/1741-2552/aceca3
Adam, J., Pichler, W. J., and Yerly, D. (2010). Delayed drug hypersensitivity: models of T-cell stimulation. Br. J. Clin. Pharmacol. 71, 701–707. doi: 10.1111/j.1365-2125.2010.03764.x
Ahmadi, A., Gomar, S., and Ahmadi, M. (2024). Editorial: from theory to practice: the latest developments in neuromorphic computing applications. Front. Neurosci. 18:1511987. doi: 10.3389/fnins.2024.1511987
Alberts, B., et al. (2002). Extracellular control of cell division, cell growth, and apoptosis. Available online at: https://www.ncbi.nlm.nih.gov/books/NBK26877/.
Aldag, N., and Nogueira, W. (2024). Psychoacoustic and electroencephalographic responses to changes in amplitude modulation depth and frequency in relation to speech recognition in cochlear implantees. Sci. Rep. 14:8181. doi: 10.1038/s41598-024-58225-1
Aleixo, E. L., Colonna, J. G., Cristo, M., and Fernandes, E. (2024). Catastrophic forgetting in deep learning: a comprehensive taxonomy. J. Braz. Comput. Soc. 30, 26–40. doi: 10.5753/jbcs.2024.3966
Allen, N. J., and Barres, B. A. (2009). Glia—more than just brain glue. Nature 457, 675–677. doi: 10.1038/457675a
Almasi, A., Sun, S. H., Yunzab, M., Jung, Y. J., Meffin, H., and Ibbotson, M. R. (2022). How stimulus statistics affect the receptive fields of cells in primary visual cortex. J. Neurosci. 42, 5198–5211. doi: 10.1523/jneurosci.0664-21.2022
Alsakkal, M. A., and Wijekoon, J. (2025). Spiketrum: an FPGA-based implementation of a neuromorphic cochlea. IEEE Trans. Circ. Syst. I Regul. Pap. 72, 1–9. doi: 10.1109/tcsi.2025.3526585
Amsalem, O., Eyal, G., Rogozinski, N., Gevaert, M., Kumbhar, P., Schürmann, F., et al. (2020). An efficient analytical reduction of detailed nonlinear neuron models. Nat. Commun. 11, 1–5. doi: 10.1038/s41467-019-13932-6
Baker, K. B., Plow, E. B., Nagel, S., Rosenfeldt, A. B., Gopalakrishnan, R., Clark, C., et al. (2023). Cerebellar deep brain stimulation for chronic post-stroke motor rehabilitation: a phase I trial. Nat. Med. 29, 2366–2374. doi: 10.1038/s41591-023-02507-0
Barack, D. L., and Platt, M. L. (2016). Neurocomputational nosology: malfunctions of models and mechanisms. Front. Psychol. 7:602. doi: 10.3389/fpsyg.2016.00602
Bauer, F. C., Lenz, G., Haghighatshoar, S., and Sheik, S. (2023). EXODUS: stable and efficient training of spiking neural networks. Front. Neurosci. 17:1110444. doi: 10.3389/fnins.2023.1110444
Bauer, F. C., Muir, D. R., and Indiveri, G. (2019). Real-time ultra-low power ECG anomaly detection using an event-driven neuromorphic processor. IEEE Trans. Biomed. Circ. Syst. 13, 1575–1582. doi: 10.1109/tbcas.2019.2953001
Bazzari, A., and Parri, H. (2019). Neuromodulators and long-term synaptic plasticity in learning and memory: a steered-glutamatergic perspective. Brain Sci. 9:300. doi: 10.3390/brainsci9110300
Benabid, A. L. (2003). Deep brain stimulation for Parkinson’s disease. Curr. Opin. Neurobiol. 13, 696–706. doi: 10.1016/j.conb.2003.11.001
Benjamin, B. V., Gao, P., McQuinn, E., Choudhary, S., Chandrasekaran, A. R., Bussat, J. M., et al. (2014). Neurogrid: a mixed-analog-digital multichip system for large-scale neural simulations. Proc. IEEE 102, 699–716. doi: 10.1109/jproc.2014.2313565
Ben-Shalom, R., Ladd, A., Artherya, N. S., Cross, C., Kim, K. G., Sanghevi, H., et al. (2021). NeuroGPU: accelerating multi-compartment, biophysically detailed neuron simulations on GPUs. J. Neurosci. Methods 366:109400. doi: 10.1016/j.jneumeth.2021.109400
Betzel, R., et al. (2024). Parallel and converging multisensory cascades in the Drosophila connectome. bioRxiv. doi: 10.1101/2024.12.04.624189
Bill, J., and Legenstein, R. (2014). A compound memristive synapse model for statistical learning through STDP in spiking neural networks. Front. Neurosci. 8:412. doi: 10.3389/fnins.2014.00412
Bingham, C. S., Loizos, K., Yu, G. J., Gilbert, A., Bouteiller, J. M. C., Song, D., et al. (2018). Model-based analysis of electrode placement and pulse amplitude for hippocampal stimulation. IEEE Trans. Biomed. Eng. 65, 2278–2289. doi: 10.1109/tbme.2018.2791860
Bird, G., and Polivoda, M. E. (2021). Backpropagation through time for networks with long-term dependencies. arXiv. doi: 10.48550/arxiv.2103.15589
Block, M. L., and Hong, J.-S. (2005). Microglia and inflammation-mediated neurodegeneration: multiple triggers with a common mechanism. Prog. Neurobiol. 76, 77–98. doi: 10.1016/j.pneurobio.2005.06.004
Bohnstingl, T., et al. (2022) Biologically-inspired training of spiking recurrent neural networks with neuromorphic hardware. 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (IEEE: AICAS), pp. 218–221.
Bohte, S. M., Kok, J. N., and La Poutré, J. A. (2000) SpikeProp: backpropagation for networks of spiking neurons. The European Symposium on Artificial Neural Networks, 419–424. Available online at: http://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2000-22.pdf (Accessed January 3, 2025).
Bos, H., and Muir, D. (2022). Sub-mW neuromorphic SNN audio processing applications with Rockpool and Xylo. arXiv. doi: 10.48550/arxiv.2208.12991
Bos, H., and Muir, D. R. (2024) Micro-power spoken keyword spotting on Xylo audio 2. Available online at: https://arxiv.org/abs/2406.15112.
Boulingre, M., Portillo-Lara, R., and Green, R. A. (2023). Biohybrid neural interfaces: improving the biological integration of neural implants. Chem. Commun. 59, 14745–14758. doi: 10.1039/d3cc05006h
Bouvier, M., Valentian, A., Mesquida, T., Rummens, F., Reyboz, M., Vianello, E., et al. (2019). Spiking neural networks hardware implementations and challenges. ACM J. Emerg. Technol. Comput. Syst. 15, 1–35. doi: 10.1145/3304103
Boybat, I., le Gallo, M., Nandakumar, S. R., Moraitis, T., Parnell, T., Tuma, T., et al. (2018). Neuromorphic computing with multi-memristive synapses. Nat. Commun. 9, 1–6. doi: 10.1038/s41467-018-04933-y
Boys, A. J., Carnicer-Lombarte, A., Güemes-Gonzalez, A., van Niekerk, D., Hilton, S., Barone, D. G., et al. (2022). 3D bioelectronics with a Remodellable matrix for long-term tissue integration and recording. Adv. Mater. 35:e2207847. doi: 10.1002/adma.202207847
Brette, R. (2015). What is the most realistic single-compartment model of spike initiation? PLoS Comput. Biol. 11:e1004114. doi: 10.1371/journal.pcbi.1004114
Büchel, J., Faber, F. F., and Muir, D. R. (2022b) Network insensitivity to parameter noise via parameter attack during training. ICLR 2022 conference program chairs. Available online at: https://openreview.net/forum?id=-8sBpe7rDiV.
Büchel, J., Lenz, G., Hu, Y., Sheik, S., and Sorbaro, M. (2022a). Adversarial attacks on spiking convolutional neural networks for event-based vision. Front. Neurosci. 16:1068193. doi: 10.3389/fnins.2022.1068193
Burr, G. W., Shelby, R. M., Sebastian, A., Kim, S., Kim, S., Sidler, S., et al. (2016). Neuromorphic computing using non-volatile memory. Adv. Phys. X 2, 89–124. doi: 10.1080/23746149.2016.1259585
Bush, P. C., and Sejnowski, T. J. (1993). Reduced compartmental models of neocortical pyramidal cells. J. Neurosci. Methods 46, 159–166. doi: 10.1016/0165-0270(93)90151-g
Caiafa, C. F., Sun, Z., Tanaka, T., Marti-Puig, P., and Solé-Casals, J. (2021). Machine learning methods with Noisy, incomplete or small datasets. Appl. Sci. 11:4132. doi: 10.3390/app11094132
Campbell, J. M., Cowan, R. L., Wahlstrom, K. L., Hollearn, M. K., Jensen, D., Davis, T., et al. (2024). Human single-neuron activity is modulated by intracranial theta burst stimulation of the basolateral amygdala. bioRxiv. doi: 10.1101/2024.11.11.622161
Cariani, P. A. (2001). Temporal coding of sensory information in the brain. Acoust. Sci. Technol. 22, 77–84. doi: 10.1250/ast.22.77
Chapin, J. K. (2004). Using multi-neuron population recordings for neural prosthetics. Nat. Neurosci. 7, 452–455. doi: 10.1038/nn1234
Chapin, J. K., Moxon, K. A., Markowitz, R. S., and Nicolelis, M. A. L. (1999). Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nat. Neurosci. 2, 664–670. doi: 10.1038/10223
Chauhan, K., Khaledi-Nasab, A., Neiman, A. B., and Tass, P. A. (2022). Dynamics of phase oscillator networks with synaptic weight and structural plasticity. Sci. Rep. 12:15003. doi: 10.1038/s41598-022-19417-9
Chiappalone, M., Cota, V. R., Carè, M., di Florio, M., Beaubois, R., Buccelli, S., et al. (2022). Neuromorphic-based neuroprostheses for brain rewiring: state-of-the-art and perspectives in neuroengineering. Brain Sci. 12:1578. doi: 10.3390/brainsci12111578
Christensen, D. V., Dittmann, R., Linares-Barranco, B., Sebastian, A., le Gallo, M., Redaelli, A., et al. (2022). 2022 roadmap on neuromorphic computing and engineering. Neuromor. Comput. Eng. 2:022501. doi: 10.1088/2634-4386/ac4a83
Close, T. G., Torben-Nielsen, B., and De Schutter, E. (2014). Reduction of multi-compartmental biophysical models by incremental, automated retuning of their parameters and synaptic weights. BMC Neurosci. 15:P178. doi: 10.1186/1471-2202-15-s1-p178
Colom, B., Alcolea, M. P., Piedrafita, G., Hall, M. W. J., Wabik, A., Dentro, S. C., et al. (2020). Spatial competition shapes the dynamic mutational landscape of normal esophageal epithelium. Nat. Genet. 52, 604–614. doi: 10.1038/s41588-020-0624-3
Contreras, L. F. H., et al. (2023) Neuromorphic neuromodulation: towards the next generation of on-device AI-revolution in electroceuticals. Available online at: https://arxiv.org/abs/2307.12471.
Contreras, L. F. H., et al. (2024). Neuromorphic neuromodulation: towards the next generation of closed-loop neurostimulation. PNAS Nexus 3, 7–10. doi: 10.1093/pnasnexus/pgae488
Das, A., et al. (2024). Brain-state mediated modulation of inter-laminar dependencies in visual cortex. Nat. Commun. 15, 2–5. doi: 10.1038/s41467-024-49144-w
Davidson, J. D., Vishwakarma, M., and Smith, M. L. (2021). Hierarchical approach for comparing collective behavior across scales: cellular systems to honey bee colonies. Front. Ecol. Evol. 9:581222. doi: 10.3389/fevo.2021.581222
Davies, M., Srinivasa, N., Lin, T. H., Chinya, G., Cao, Y., Choday, S. H., et al. (2018). Loihi: a neuromorphic Manycore processor with on-Chip learning. IEEE Micro 38, 82–99. doi: 10.1109/mm.2018.112130359
Davison, A. P., and Appukuttan, S. (2022). A faster way to model neuronal circuitry. eLife 11, 1–2. doi: 10.7554/elife.84463
De Schutter, E., and Bower, J. M. (1994). An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. J. Neurophysiol. 71, 375–400. doi: 10.1152/jn.1994.71.1.375
Dehghanzadeh, P., Zamani, H., and Mandal, S. (2021). Fundamental trade-offs between power and data transfer in inductive links for biomedical implants. IEEE Trans. Biomed. Circ. Syst. 15, 235–247. doi: 10.1109/tbcas.2021.3063376
Destexhe, A., Contreras, D., and Steriade, M. (1998). Mechanisms underlying the synchronizing action of corticothalamic feedback through inhibition of thalamic relay cells. J. Neurophysiol. 79, 999–1016. doi: 10.1152/jn.1998.79.2.999
DeWolf, T., Stewart, T. C., Slotine, J. J., and Eliasmith, C. (2016). A spiking neural model of adaptive arm control. Proc. R. Soc. B Biol. Sci. 283:20162134. doi: 10.1098/rspb.2016.2134
Ding, C. (2024). Neurostimulation and wireless data transmission circuits for implantable high-density neural interfaces. EPFL. doi: 10.5075/epfl-thesis-11270
Donati, E., Payvand, M., Risi, N., Krause, R., and Indiveri, G. (2019). Discrimination of EMG signals using a neuromorphic implementation of a spiking neural network. IEEE Trans. Biomed. Circ. Syst. 13, 795–803. doi: 10.1109/tbcas.2019.2925454
Donati, E., and Valle, G. (2024). Neuromorphic hardware for somatosensory neuroprostheses. Nat. Commun. 15, 2–6. doi: 10.1038/s41467-024-44723-3
Du, N., et al. (2021). Synaptic plasticity in memristive artificial synapses and their robustness against noisy inputs. Front. Neurosci. 15:660894. doi: 10.3389/fnins.2021.660894
Duan, X., Cao, Z., Gao, K., Yan, W., Sun, S., Zhou, G., et al. (2024). Memristor-based neuromorphic chips. Adv. Mater. 36:e2310704. doi: 10.1002/adma.202310704
Eshraghian, J. K., Ward, M., Neftci, E. O., Wang, X., Lenz, G., Dwivedi, G., et al. (2023). Training spiking neural networks using lessons from deep learning. Proc. IEEE 111, 1016–1054. doi: 10.1109/jproc.2023.3308088
Fang, X., Duan, S., and Wang, L. (2021). Memristive Hodgkin-Huxley spiking neuron model for reproducing neuron behaviors. Front. Neurosci. 15:730566. doi: 10.3389/fnins.2021.730566
Fani, N., et al. (2023). Current advances in wound healing and regenerative medicine. Curr. Stem Cell Res. Ther. 19, 277–291. doi: 10.2174/1574888x18666230301140659
Fernandez, E. (2018). Development of visual Neuroprostheses: trends and challenges. Bioelectr. Med. 4:12. doi: 10.1186/s42234-018-0013-8
Ferré, P., Mamalet, F., and Thorpe, S. J. (2018). Unsupervised feature learning with winner-takes-all based STDP. Front. Comput. Neurosci. 12:24. doi: 10.3389/fncom.2018.00024
Fisher, L. E., Gaunt, R. A., and Huang, H. (2023). Sensory restoration for improved motor control of prostheses. Curr. Opin. Biomed. Eng. 28:100498. doi: 10.1016/j.cobme.2023.100498
Fornos, A. P., et al. (2019). Designing artificial senses: steps from physiology to clinical implementation. Schweiz. Med. Wochenschr. 149:w20061. doi: 10.4414/smw.2019.20061
Franze, K. (2013). The mechanical control of nervous system development. Development 140, 3069–3077. doi: 10.1242/dev.079145
Friedrich, P., Gulyás, A. I., Freund, T. F., and Káli, S. (2013). A flexible software tool for fitting the parameters of neuronal models. BMC Neurosci. 14:63. doi: 10.1186/1471-2202-14-s1-p202
Fu, T.-M., Hong, G., Zhou, T., Schuhmann, T. G., Viveros, R. D., and Lieber, C. M. (2016). Stable long-term chronic brain mapping at the single-neuron level. Nat. Methods 13, 875–882. doi: 10.1038/nmeth.3969
Furber, S. (2016). Large-scale neuromorphic computing systems. J. Neural Eng. 13:051001. doi: 10.1088/1741-2560/13/5/051001
Furber, S. B., Lester, D. R., Plana, L. A., Garside, J. D., Painkras, E., Temple, S., et al. (2012). Overview of the SpiNNaker system architecture. IEEE Trans. Comput. 62, 2454–2467. doi: 10.1109/tc.2012.142
Gerstner, W. (1995). Time structure of the activity in neural network models. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 51, 738–758. doi: 10.1103/physreve.51.738
Gilja, V., Nuyujukian, P., Chestek, C. A., Cunningham, J. P., Yu, B. M., Fan, J. M., et al. (2012). A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15, 1752–1757. doi: 10.1038/nn.3265
Gonzalez, H. A., et al. (2024). SPINNAKeR2: a large-scale neuromorphic system for event-based and asynchronous machine learning. arXiv. doi: 10.48550/arxiv.2401.04491
Gulino, M., Kim, D., Pané, S., Santos, S. D., and Pêgo, A. P. (2019). Tissue response to neural implants: the use of model systems toward new design solutions of implantable microelectrodes. Front. Neurosci. 13:689. doi: 10.3389/fnins.2019.00689
Guo, W., Fouda, M. E., Eltawil, A. M., and Salama, K. N. (2021). Neural coding in spiking neural networks: a comparative study for robust neuromorphic systems. Front. Neurosci. 15:638474. doi: 10.3389/fnins.2021.638474
Guo, L., Weiße, A., Zeinolabedin, S. M. A., Schüffny, F. M., Stolba, M., Ma, Q., et al. (2024). 68-channel neural signal processing system-on-chip with integrated feature extraction, compression, and hardware accelerators for neuroprosthetics in 22 nm FDSOI. Front. Neurosci. 18:1432750. doi: 10.3389/fnins.2024.1432750
Guo, Y., et al. (2022). RecDis-SNN: rectifying membrane potential distribution for directly training spiking neural networks. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE: CVPR), pp. 326–335.
Gupta, P., Balasubramaniam, N., Chang, H. Y., Tseng, F. G., and Santra, T. S. (2020). A single-neuron: current trends and future prospects. Cells 9:1528. doi: 10.3390/cells9061528
Gupta, A., Vardalakis, N., and Wagner, F. B. (2023). Neuroprosthetics: from sensorimotor to cognitive disorders. Commun. Biol. 6:14. doi: 10.1038/s42003-022-04390-w
Gygax, J., and Zenke, F. (2024). Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks. arXiv. doi: 10.48550/arxiv.2404.14964
Hall, L. T., Beart, G. C. G., Thomas, E. A., Simpson, D. A., McGuinness, L. P., Cole, J. H., et al. (2012). High spatial and temporal resolution wide-field imaging of neuron activity using quantum NV-diamond. Sci. Rep. 2:401. doi: 10.1038/srep00401
Hameroff, S., and Penrose, R. (2013). Consciousness in the universe. Phys Life Rev 11, 39–78. doi: 10.1016/j.plrev.2013.08.002
Harrison, R. R. (2008). The design of integrated circuits to observe brain activity. Proc. IEEE 96, 1203–1216. doi: 10.1109/jproc.2008.922581
Hartshorn, A., and Jobst, B. (2018). Responsive brain stimulation in epilepsy. Ther. Adv. Chron, Dis. 9, 135–142. doi: 10.1177/2040622318774173
Hassan, A. M., Li, H. H., and Chen, Y. (2017). Hardware implementation of echo state networks using memristor double crossbar arrays. 2017 International Joint Conference on Neural Networks (IEEE: IJCNN), pp. 2171–2177.
He, F., Lycke, R., Ganji, M., Xie, C., and Luan, L. (2020). Ultraflexible neural electrodes for long-lasting intracortical recording. iScience 23:101387. doi: 10.1016/j.isci.2020.101387
Hebb, D. O. (1988) The organization of behavior, MIT Press, 43–54. Available online at: http://dl.acm.org/citation.cfm?id=104380.
Hegade, A., and Rashid, M. (2024). Operating principles of interconnected feedback loops underlying cell fate decisions. bioRxiv. doi: 10.1101/2024.05.24.595855
Hendrickson, E. B., Edgerton, J. R., and Jaeger, D. (2010). The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites. J. Comput. Neurosci. 30, 301–321. doi: 10.1007/s10827-010-0258-z
Hennessy, J. L., and Patterson, D. A. (2011). Computer architecture: a quantitative approach, 5th Edn. Morgan Kaufmann Publishers Inc., p. 880. Available online at: https://dl.acm.org/citation.cfm?id=3207796.
Herz, A. V. M., Gollisch, T., Machens, C. K., and Jaeger, D. (2006). Modeling single-neuron dynamics and computations: a balance of detail and abstraction. Science 314, 80–85. doi: 10.1126/science.1127240
Highton, J., et al. (2024). Robustness testing of black-box models against CT degradation through test-time augmentation. arXiv. doi: 10.48550/arxiv.2406.19557
Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., et al. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–375. doi: 10.1038/nature11076
Hodgkin, A. L., and Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544. doi: 10.1113/jphysiol.1952.sp004764
Horowitz, M. (2014). "1.1 Computing’s energy problem (and what we can do about it)," in 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC).
Howard, N., Chouikhi, N., Adeel, A., Dial, K., Howard, A., and Hussain, A. (2020). BrainOS: a novel artificial brain-alike automatic machine learning framework. Front. Comput. Neurosci. 14:16. doi: 10.3389/fncom.2020.00016
Howard, N., and Hussain, A. (2018). The fundamental code unit of the brain: towards a new model for cognitive geometry. Cogn. Comput. 10, 426–436. doi: 10.1007/s12559-017-9538-5
Huang, H. (2024). One-step data-driven generative model via Schrödinger Bridge. arXiv. doi: 10.48550/arxiv.2405.12453
Huang, Y., Kiani, F., Ye, F., and Xia, Q. (2023). From memristive devices to neuromorphic systems. Appl. Phys. Lett. 122, 1–2. doi: 10.1063/5.0133044
Hussain, M. A., Grill, W. M., and Pelot, N. A. (2024). Highly efficient modeling and optimization of neural fiber responses to electrical stimulation. Nat. Commun. 15, 4–5. doi: 10.1038/s41467-024-51709-8
Hutson, M. (2023). Tiny robots made from human cells heal damaged tissue. Nature. doi: 10.1038/d41586-023-03777-x
Hwang, G.-T., Kim, Y., Lee, J. H., Oh, S. K., Jeong, C. K., Park, D. Y., et al. (2015). Self-powered deep brain stimulation via a flexible PIMNT energy harvester. Energy Environ. Sci. 8, 2677–2684. doi: 10.1039/c5ee01593f
Hylton, T. (2007). Systems of neuromorphic adaptive plastic scalable electronics. Proceedings of the DARPA SYNAPSE Bidder’s Workshop Teaming Meeting.
Indiveri, G., Chicca, E., and Douglas, R. J. (2009). Artificial cognitive systems: from VLSI networks of spiking neurons to neuromorphic cognition. Cogn. Comput. 1, 119–127. doi: 10.1007/s12559-008-9003-6
Indiveri, G., Linares-Barranco, B., Hamilton, T. J., van Schaik, A., Etienne-Cummings, R., Delbruck, T., et al. (2011). Neuromorphic silicon neuron circuits. Front. Neurosci. 5:73. doi: 10.3389/fnins.2011.00073
Indiveri, G., and Liu, S.-C. (2015). Memory and information processing in neuromorphic systems. Proc. IEEE 103, 1379–1397. doi: 10.1109/jproc.2015.2444094
Isik, M., et al. (2024). "Advancing neuromorphic computing: mixed-signal design techniques leveraging brain code units and fundamental code units," in International Joint Conference on Neural Networks (IJCNN), pp. 1–8.
Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572. doi: 10.1109/tnn.2003.820440
Javanshir, A., Nguyen, T. T., Mahmud, M. A. P., and Kouzani, A. Z. (2022). Advancements in algorithms and neuromorphic hardware for spiking neural networks. Neural Comput. 34, 1289–1328. doi: 10.1162/neco_a_01499
Ji, M., Wang, Z., Yan, R., Liu, Q., Xu, S., and Tang, H. (2023). SCTN: event-based object tracking with energy-efficient deep convolutional spiking neural networks. Front. Neurosci. 17:1123698. doi: 10.3389/fnins.2023.1123698
Jiang, H.-J., et al. (2024). Modeling neuron-astrocyte interactions in neural networks using distributed simulation. bioRxiv. doi: 10.1101/2024.11.11.622953
Jirsa, V. K., Proix, T., Perdikis, D., Woodman, M. M., Wang, H., Gonzalez-Martinez, J., et al. (2016). The virtual epileptic patient: individualized whole-brain models of epilepsy spread. Neuroimage 145, 377–388. doi: 10.1016/j.neuroimage.2016.04.049
Kandel, E. R. (1976). Cellular basis of behavior: an introduction to behavioral neurobiology. W.H. Freeman. Available online at: http://ci.nii.ac.jp/ncid/BA01136690.
Kandel, E. R., and Spencer, W. A. (1968). Cellular neurophysiological approaches in the study of learning. Physiol. Rev. 48, 65–134. doi: 10.1152/physrev.1968.48.1.65
Karim, M. D. R., et al. (2023). "Interpreting black-box machine learning models for high dimensional datasets," in 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–10.
Kastellakis, G., Cai, D. J., Mednick, S. C., Silva, A. J., and Poirazi, P. (2015). Synaptic clustering within dendrites: an emerging theory of memory formation. Prog. Neurobiol. 126, 19–35. doi: 10.1016/j.pneurobio.2014.12.002
Kheradpisheh, S. R., Ganjtabesh, M., Thorpe, S. J., and Masquelier, T. (2017). STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw. 99, 56–67. doi: 10.1016/j.neunet.2017.12.005
Kim, Y., Li, Y., Moitra, A., Yin, R., and Panda, P. (2023). Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks. Front. Neurosci. 17:1230002. doi: 10.3389/fnins.2023.1230002
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. USA 114, 3521–3526. doi: 10.1073/pnas.1611835114
Kozai, T. D. Y., et al. (2014). Brain tissue responses to neural implants impact signal sensitivity and intervention strategies. ACS Chem. Neurosci. 6, 48–67. doi: 10.1021/cn500256e
Krauhausen, I., Griggs, S., McCulloch, I., den Toonder, J., Gkoupidenis, P., and van de Burgt, Y. (2024). Bio-inspired multimodal learning with organic neuromorphic electronics for behavioral conditioning in robotics. Nat. Commun. 15, 4–7. doi: 10.1038/s41467-024-48881-2
Kriegman, S., Blackiston, D., Levin, M., and Bongard, J. (2020). A scalable pipeline for designing reconfigurable organisms. Proc. Natl. Acad. Sci. 117, 1853–1859. doi: 10.1073/pnas.1910837117
Kubelt, C., Molkewehrum, H., Lucius, R., Synowitz, M., Held-Feindt, J., and Helmers, A. K. (2021). Influence of simulated deep brain stimulation on the expression of inflammatory mediators by human central nervous system cells in vitro. NeuroMolecular Med. 24, 169–182. doi: 10.1007/s12017-021-08674-y
Kudithipudi, D., Schuman, C., Vineyard, C. M., Pandit, T., Merkel, C., Kubendran, R., et al. (2025). Neuromorphic computing at scale. Nature 637, 801–812. doi: 10.1038/s41586-024-08253-8
Kumosa, L. S. (2023). Commonly overlooked factors in biocompatibility studies of neural implants. Adv. Sci. 10, 3–4. doi: 10.1002/advs.202205095
Ladoux, B., and Mège, R.-M. (2017). Mechanobiology of collective cell behaviours. Nat. Rev. Mol. Cell Biol. 18, 743–757. doi: 10.1038/nrm.2017.98
Lambert, N., Chen, Y. N., Cheng, Y. C., Li, C. M., Chen, G. Y., and Nori, F. (2012). Quantum biology. Nat. Phys. 9, 10–18. doi: 10.1038/nphys2474
Lamon, S., Zhang, Q., Yu, H., and Gu, M. (2024). Neuromorphic optical data storage enabled by nanophotonics: a perspective. ACS Photon. 11, 874–891. doi: 10.1021/acsphotonics.3c01253
Lan, Y., Wang, X., and Wang, Y. (2021). Spatio-temporal sequential memory model with mini-column neural network. Front. Neurosci. 15:650430. doi: 10.3389/fnins.2021.650430
Latimer, K. W., Rieke, F., and Pillow, J. W. (2019). Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 8:e47012. doi: 10.7554/elife.47012
Lebedev, M. A., and Nicolelis, M. A. L. (2006). Brain–machine interfaces: past, present and future. Trends Neurosci. 29, 536–546. doi: 10.1016/j.tins.2006.07.004
Lebedev, M. A., and Nicolelis, M. A. L. (2017). Brain-machine interfaces: from basic science to neuroprostheses and neurorehabilitation. Physiol. Rev. 97, 767–837. doi: 10.1152/physrev.00027.2016
Lee, A. L., Campbell, L. B., and Sapolsky, R. M. (2010). Neighbor effects of neurons bearing protective transgenes. Brain Res. 1339, 70–75. doi: 10.1016/j.brainres.2010.04.037
Lee, J. H., Delbruck, T., and Pfeiffer, M. (2016). Training deep spiking neural networks using backpropagation. Front. Neurosci. 10:508. doi: 10.3389/fnins.2016.00508
Lee, S.-T., and Lee, J.-H. (2020). Neuromorphic computing using NAND Flash memory architecture with pulse width modulation scheme. Front. Neurosci. 14:571292. doi: 10.3389/fnins.2020.571292
Levin, M. (2021). Bioelectric signaling: reprogrammable circuits underlying embryogenesis, regeneration, and cancer. Cell 184, 1971–1989. doi: 10.1016/j.cell.2021.02.034
Li, Z., Tang, W., Zhang, B., Yang, R., and Miao, X. (2023). Emerging memristive neurons for neuromorphic computing and sensing. Sci. Technol. Adv. Mater. 24, 6–8. doi: 10.1080/14686996.2023.2188878
Li, R., Yue, Z., Luan, H., Dong, Y., Chen, X., and Gu, M. (2024a). Multimodal artificial synapses for neuromorphic application. Research 7:0427. doi: 10.34133/research.0427
Li, R., Zhao, G., Muir, D. R., Ling, Y., Burelo, K., Khoe, M., et al. (2024b). Real-time sub-milliwatt epilepsy detection implemented on a spiking neural network edge inference processor. Comput. Biol. Med. 183:109225. doi: 10.1016/j.compbiomed.2024.109225
Liao, J., et al. (2024). A spiking neural network decoder for implantable brain machine interfaces and its sparsity-aware deployment on RISC-V microcontrollers. arXiv. doi: 10.48550/arxiv.2405.02146
Lindsay, G. W. (2020). Convolutional neural networks as a model of the visual system: past, present, and future. J. Cogn. Neurosci. 33, 2017–2031. doi: 10.1162/jocn_a_01544
Linne, M.-L. (2024). Computational modeling of neuron–glia signaling interactions to unravel cellular and neural circuit functioning. Curr. Opin. Neurobiol. 85:102838. doi: 10.1016/j.conb.2023.102838
Liu, S.-C., and Delbruck, T. (2010). Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20, 288–295. doi: 10.1016/j.conb.2010.03.007
Liu, Z., et al. (2020). Neural signal analysis with memristor arrays towards high-efficiency brain–machine interfaces. Nat. Commun. 11, 4–6. doi: 10.1038/s41467-020-18105-4
Loiselle, S., et al. (2006). "Exploration of rank order coding with spiking neural networks for speech recognition," in Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN), Vol. 4, pp. 2076–2080.
Lumer, E. (1997). Neural dynamics in a model of the thalamocortical system. I. Layers, loops and the emergence of fast synchronous rhythms. Cereb. Cortex 7, 207–227. doi: 10.1093/cercor/7.3.207
Lutes, N., Nadendla, V. S. S., and Krishnamurthy, K. (2024). Convolutional spiking neural networks for intent detection based on anticipatory brain potentials using electroencephalogram. Sci. Rep. 14:8850. doi: 10.1038/s41598-024-59469-7
Maass, W. (1996). Networks of spiking neurons: the third generation of neural network models. Electr. Colloq. Comput. Complex. 3, 5–8.
Madan, E., Pelham, C. J., Nagane, M., Parker, T. M., Canas-Marques, R., Fazio, K., et al. (2019). Flower isoforms promote competitive growth in cancer. Nature 572, 260–264. doi: 10.1038/s41586-019-1429-3
Mainen, Z. F., and Sejnowski, T. J. (1996). Influence of dendritic structure on firing pattern in model neocortical neurons. Nature 382, 363–366. doi: 10.1038/382363a0
Marcucci, L., Canato, M., Protasi, F., Stienen, G. J. M., and Reggiani, C. (2018). A 3D diffusional-compartmental model of the calcium dynamics in cytosol, sarcoplasmic reticulum and mitochondria of murine skeletal muscle fibers. PLoS One 13:e0201050. doi: 10.1371/journal.pone.0201050
Markram, H. (2012). The human brain project. Sci. Am. 306, 50–55. doi: 10.1038/scientificamerican0612-50
Marr, D. (1969). A theory of cerebellar cortex. J. Physiol. 202, 437–470. doi: 10.1113/jphysiol.1969.sp008820
Mayr, C. G., Sheik, S., Bartolozzi, C., and Chicca, E. (2016). Editorial: synaptic plasticity for neuromorphic systems. Front. Neurosci. 10:214. doi: 10.3389/fnins.2016.00214
McGinley, M. J., Vinck, M., Reimer, J., Batista-Brito, R., Zagha, E., Cadwell, C. R., et al. (2015). Waking state: rapid variations modulate neural and behavioral responses. Neuron 87, 1143–1161. doi: 10.1016/j.neuron.2015.09.012
McIntyre, C. C., and Foutz, T. J. (2013). Computational modeling of deep brain stimulation. Handb. Clin. Neurol. 116, 55–61. doi: 10.1016/b978-0-444-53497-2.00005-x
Mead, C. (1989). Analog VLSI and neural systems. Addison-Wesley. Available online at: http://ci.nii.ac.jp/ncid/BA06849352.
Mead, C., and Conway, L. (1978). Introduction to VLSI systems. Available online at: http://portal.acm.org/citation.cfm?id=578480.
Mead, C., and Ismail, M. (1989). Analog VLSI implementation of neural systems. Kluwer Int. Ser. Eng. Comput. Sci. 80, 30–50. doi: 10.1007/978-1-4613-1639-8
Mehonic, A., Ielmini, D., Roy, K., Mutlu, O., Kvatinsky, S., Serrano-Gotarredona, T., et al. (2024). Roadmap to neuromorphic computing with emerging technologies. APL Materials 12, 18–21. doi: 10.1063/5.0179424
Merabet, L. B., and Pascual-Leone, A. (2009). Neural reorganization following sensory loss: the opportunity of change. Nat. Rev. Neurosci. 11, 44–52. doi: 10.1038/nrn2758
Merk, T., et al. (2023). Invasive neurophysiology and whole brain connectomics for neural decoding in patients with brain implants. Res. Sq. doi: 10.21203/rs.3.rs-3212709/v1
Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673. doi: 10.1126/science.1254642
Meyer, A. F., Williamson, R. S., Linden, J. F., and Sahani, M. (2017). Models of neuronal stimulus-response functions: elaboration, estimation, and evaluation. Front. Syst. Neurosci. 10:109. doi: 10.3389/fnsys.2016.00109
Miziev, S., Pawlak, W. A., and Howard, N. (2024). Comparative analysis of energy transfer mechanisms for neural implants. Front. Neurosci. 17:1320441. doi: 10.3389/fnins.2023.1320441
Moradi, S., and Indiveri, G. (2013). An event-based neural network architecture with an asynchronous programmable synaptic memory. IEEE Trans. Biomed. Circ. Syst. 8, 98–107. doi: 10.1109/tbcas.2013.2255873
Moskovitz, T. H., Roy, N. A., and Pillow, J. W. (2018). A comparison of deep learning and linear-nonlinear cascade approaches to neural encoding. bioRxiv. doi: 10.1101/463422
Müller, M., Hu, H., Dietz, M., Beiderbeck, B., Ferreiro, D. N., and Pecka, M. (2023). Temporal hyper-precision of brainstem neurons alters spatial sensitivity of binaural auditory processing with cochlear implants. Front. Neurosci. 16:1021541. doi: 10.3389/fnins.2022.1021541
Munn, B. R., Müller, E. J., Medel, V., Naismith, S. L., Lizier, J. T., Sanders, R. D., et al. (2023). Neuronal connected burst cascades bridge macroscale adaptive signatures across arousal states. Nat. Commun. 14, 6–11. doi: 10.1038/s41467-023-42465-2
Murray, A. F., and Edwards, P. J. (1994). Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Trans. Neural Netw. 5, 792–802. doi: 10.1109/72.317730
Musk, E. (2019). An integrated brain-machine interface platform with thousands of channels. J. Med. Internet Res. 21:e16194. doi: 10.2196/16194
Nápoles, G., Jastrzebska, A., Grau, I., and Salgueiro, Y. (2024). Backpropagation through time learning for recurrence-aware long-term cognitive networks. Knowl.-Based Syst. 295:111825. doi: 10.1016/j.knosys.2024.111825
Neri, P. (2015). The elementary operations of human vision are not reducible to template-matching. PLoS Comput. Biol. 11:e1004499. doi: 10.1371/journal.pcbi.1004499
Neuralink Compression Challenge (2025). Available online at: https://content.neuralink.com/compression-challenge/README.html.
Neuromorphic Electronic Systems (1990). Proc. IEEE 78, 1629–1636. doi: 10.1109/5.58356. Available online at: https://ieeexplore.ieee.org/document/58356.
Nirmal, K. A., Kumbhar, D. D., Kesavan, A. V., Dongale, T. D., and Kim, T. G. (2024). Advancements in 2D layered material memristors: unleashing their potential beyond memory. Npj 2D Mater. Appl. 8, 17–21. doi: 10.1038/s41699-024-00522-4
Niu, C. M., Luo, Q., Chou, C. H., Liu, J., Hao, M., and Lan, N. (2020). Neuromorphic model of reflex for realtime human-like compliant control of prosthetic hand. Ann. Biomed. Eng. 49, 673–688. doi: 10.1007/s10439-020-02596-9
Nye, L. (2023). Digital twins for patient care via knowledge graphs and closed-form continuous-time liquid neural networks. arXiv. doi: 10.48550/arxiv.2307.04772
Park, J., Akbaba, G. E., Sharma, N., Das, R., Vinikoor, T., Liu, Y., et al. (2025). Electrically active biomaterials for stimulation and regeneration in tissue engineering. J. Biomed. Mater. Res. A 113:e37871. doi: 10.1002/jbm.a.37871
Pathak, A., Roy, D., and Banerjee, A. (2022). Whole-brain network models: from physics to bedside. Front. Comput. Neurosci. 16:866517. doi: 10.3389/fncom.2022.866517
Patiño-Saucedo, A., Rostro-González, H., Serrano-Gotarredona, T., and Linares-Barranco, B. (2022). Liquid state machine on SpiNNaker for spatio-temporal classification tasks. Front. Neurosci. 16:819063. doi: 10.3389/fnins.2022.819063
Pawlak, W. A., et al. (2024). Exploring liquid neural networks on LOIHI-2. arXiv. doi: 10.48550/arxiv.2407.20590
Peres, L., and Rhodes, O. (2022). Parallelization of neural processing on neuromorphic hardware. Front. Neurosci. 16:867027. doi: 10.3389/fnins.2022.867027
Petrovici, M. A., Bill, J., Bytschok, I., Schemmel, J., and Meier, K. (2016). Stochastic inference with spiking neurons in the high-conductance state. Phys. Rev. E 94, 1–4. doi: 10.1103/physreve.94.042312
Pichler, W. J., Adam, J., Watkins, S., Wuillemin, N., Yun, J., and Yerly, D. (2015). Drug hypersensitivity: how drugs stimulate T cells via pharmacological interaction with immune receptors. Int. Arch. Allergy Immunol. 168, 13–24. doi: 10.1159/000441280
Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., et al. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999. doi: 10.1038/nature07140
Polikov, V. S., Tresco, P. A., and Reichert, W. M. (2005). Response of brain tissue to chronically implanted neural electrodes. J. Neurosci. Methods 148, 1–18. doi: 10.1016/j.jneumeth.2005.08.015
Ponulak, F. (2005). ReSuMe: new supervised learning method for spiking neural networks. Institute of Control and Information Engineering, Poznań University of Technology.
Ponulak, F., and Kasiński, A. (2009). Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Comput. 22, 467–510. doi: 10.1162/neco.2009.11-08-901
Posch, C., Serrano-Gotarredona, T., Linares-Barranco, B., and Delbruck, T. (2014). Retinomorphic event-based vision sensors: bioinspired cameras with spiking output. Proc. IEEE 102, 1470–1484. doi: 10.1109/jproc.2014.2346153
Prosky, J., Cagle, J., Sellers, K. K., Gilron, R., de Hemptinne, C., Schmitgen, A., et al. (2021). Practical closed-loop strategies for deep brain stimulation: lessons from chronic pain. Front. Neurosci. 15:762097. doi: 10.3389/fnins.2021.762097
Qi, Y., Chen, J., and Wang, Y. (2023). Neuromorphic computing facilitates deep brain-machine fusion for high-performance neuroprosthesis. Front. Neurosci. 17:1153985. doi: 10.3389/fnins.2023.1153985
Qi, Y., Kang, S.-K., and Fang, H. (2023). Advanced materials for implantable neuroelectronics. MRS Bull. 48, 475–483. doi: 10.1557/s43577-023-00540-5
Qiao, N., Mostafa, H., Corradi, F., Osswald, M., Stefanini, F., Sumislawska, D., et al. (2015). A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front. Neurosci. 9:141. doi: 10.3389/fnins.2015.00141
Quan, J., Liu, Z., Li, B., Zeng, C., and Luo, J. (2023). 55 nm CMOS mixed-signal neuromorphic circuits for constructing energy-efficient reconfigurable SNNs. Electronics 12:4147. doi: 10.3390/electronics12194147
Rall, W. (1977). Core conductor theory and cable properties of neurons. Compr. Physiol., 39–97. doi: 10.1002/cphy.cp010103
Ramezani, M., Kim, J. H., Liu, X., Ren, C., Alothman, A., de-Eknamkul, C., et al. (2024). High-density transparent graphene arrays for predicting cellular calcium activity at depth from surface potential recordings. Nat. Nanotechnol. 19, 504–513. doi: 10.1038/s41565-023-01576-z
Ramezanian-Panahi, M., Abrevaya, G., Gagnon-Audet, J. C., Voleti, V., Rish, I., and Dumas, G. (2022). Generative models of brain dynamics. Front. Artif. Intell. 5:807406. doi: 10.3389/frai.2022.807406
Rao, J. S., Kellom, M., Kim, H. W., Rapoport, S. I., and Reese, E. A. (2012). Neuroinflammation and synaptic loss. Neurochem. Res. 37, 903–910. doi: 10.1007/s11064-012-0708-2
Rao, R. P. N., Stocco, A., Bryan, M., Sarma, D., Youngquist, T. M., Wu, J., et al. (2014). A direct brain-to-brain interface in humans. PLoS One 9:e111332. doi: 10.1371/journal.pone.0111332
Rathi, N., Chakraborty, I., Kosta, A., Sengupta, A., Ankit, A., Panda, P., et al. (2022). Exploring neuromorphic computing based on spiking neural networks: algorithms to hardware. ACM Comput. Surv. 55, 1–49. doi: 10.1145/3571155
Reardon, S. (2017). AI-controlled brain implants for mood disorders tested in people. Nature 551, 549–550. doi: 10.1038/nature.2017.23031
Rhodes, O., Peres, L., Rowley, A. G. D., Gait, A., Plana, L. A., Brenninkmeijer, C., et al. (2019). Real-time cortical simulation on neuromorphic hardware. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 378:20190160. doi: 10.1098/rsta.2019.0160
Rinzel, J., and Ermentrout, G. B. (1989). "Analysis of neural excitability and oscillations," in Methods in neuronal modeling (MIT Press), 135–169. Available online at: https://dl.acm.org/citation.cfm?id=94613.
Roa, J. A., Marcuse, L., Fields, M., Vega-Talbott, M. L., Yoo, J. Y., Wolf, S. M., et al. (2023). Long-term outcomes after responsive neurostimulation for treatment of refractory epilepsy: a single-center experience of 100 cases. J. Neurosurg. 139, 1463–1470. doi: 10.3171/2023.2.jns222116
Rochford, A. E., Carnicer-Lombarte, A., Curto, V. F., Malliaras, G. G., and Barone, D. G. (2019). When bio meets technology: biohybrid neural interfaces. Adv. Mater. 32:e1903182. doi: 10.1002/adma.201903182
Rommelfanger, N. J., Keck, C. H. C., Chen, Y., and Hong, G. (2021). Learning from the brain’s architecture: bioinspired strategies towards implantable neural interfaces. Curr. Opin. Biotechnol. 72, 8–12. doi: 10.1016/j.copbio.2021.07.020
Roth, A., and Bahl, A. (2009). Divide et impera: optimizing compartmental models of neurons step by step. J. Physiol. 587, 1369–1370. doi: 10.1113/jphysiol.2009.170944
Roy, K., Jaiswal, A., and Panda, P. (2019). Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617. doi: 10.1038/s41586-019-1677-2
Rueckauer, B., Lungu, I. A., Hu, Y., Pfeiffer, M., and Liu, S. C. (2017). Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci. 11:682. doi: 10.3389/fnins.2017.00682
Saadatinia, M., and Salimi-Badr, A. (2024). An explainable deep learning-based method for schizophrenia diagnosis using generative data-augmentation. IEEE Access 12, 98379–98392. doi: 10.1109/access.2024.3428847
Saddler, M. R., and McDermott, J. H. (2024). Models optimized for real-world tasks reveal the task-dependent necessity of precise temporal coding in hearing. Nat. Commun. 15, 1–5. doi: 10.1038/s41467-024-54700-5
Sadegh-Zadeh, S.-A., Bahrami, M., Soleimani, O., and Ahmadi, S. (2024). Neural reshaping: the plasticity of human brain and artificial intelligence in the learning process. Am. J. Neurodegener. Dis. 13, 34–48. doi: 10.62347/nhkd7661
Salatino, J. W., Ludwig, K. A., Kozai, T. D. Y., and Purcell, E. K. (2017). Glial responses to implanted electrodes in the brain. Nat. Biomed. Eng. 1, 862–877. doi: 10.1038/s41551-017-0154-1
Samee, N. A., Mahmoud, N. F., Aldhahri, E. A., Rafiq, A., Muthanna, M. S. A., and Ahmad, I. (2022). RNN and BiLSTM fusion for accurate automatic epileptic seizure diagnosis using EEG signals. Life 12:1946. doi: 10.3390/life12121946
Santoro, E. M. F., et al. (2024). Purkinje cell models: past, present and future. Front. Comput. Neurosci. 18:1426653. doi: 10.3389/fncom.2024.1426653
Saxena, S., Schieber, M. H., Thakor, N. V., and Sarma, S. V. (2012). Aggregate input-output models of neuronal populations. IEEE Trans. Biomed. Eng. 59, 2030–2039. doi: 10.1109/tbme.2012.2196699
Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N., and Wolpaw, J. R. (2004). BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51, 1034–1043. doi: 10.1109/tbme.2004.827072
Schaller, R. R. (1997). Moore’s law: past, present and future. IEEE Spectr. 34, 52–59. doi: 10.1109/6.591665
Schemmel, J., et al. (2020). Accelerated analog neuromorphic computing. arXiv. doi: 10.48550/arxiv.2003.11996
Schiff, N. D., Giacino, J. T., Butson, C. R., Choi, E. Y., Baker, J. L., O’Sullivan, K. P., et al. (2023). Thalamic deep brain stimulation in traumatic brain injury: a phase 1, randomized feasibility study. Nat. Med. 29, 3162–3174. doi: 10.1038/s41591-023-02638-4
Schuman, C. D., Kulkarni, S. R., Parsa, M., Mitchell, J. P., Date, P., and Kay, B. (2022). Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2, 10–19. doi: 10.1038/s43588-021-00184-y
Schuman, C. D., et al. (2017). A survey of neuromorphic computing and neural networks in hardware. arXiv. doi: 10.48550/arxiv.1705.06963
Schuman, C. D., et al. (2019). "Non-traditional input encoding schemes for spiking neuromorphic systems," in 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–10.
Seo, J.-S., and Seok, M. (2015). "Digital CMOS neuromorphic processor design featuring unsupervised online learning," in 2015 IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), pp. 49–51.
Serrano-Amenos, C., Heydari, P., Liu, C. Y., Do, A. H., and Nenadic, Z. (2023). Power budget of a skull unit in a fully-implantable brain-computer interface: bio-heat model. IEEE Trans. Neural Syst. Rehabil. Eng. 31, 4029–4039. doi: 10.1109/tnsre.2023.3323916
Shah, N. P., Phillips, A. J., Madugula, S., Lotlikar, A., Gogliettino, A. R., Hays, M. R., et al. (2024). Precise control of neural activity using dynamically optimized electrical stimulation. eLife 13, 5–11. doi: 10.7554/elife.83424
Shahsavari, M., Thomas, D., van Gerven, M., Brown, A., and Luk, W. (2023). Advancements in spiking neural network communication and synchronization techniques for event-driven neuromorphic systems. Array 20:100323. doi: 10.1016/j.array.2023.100323
Shanechi, M. M., Orsborn, A. L., and Carmena, J. M. (2016). Robust brain-machine interface design using optimal feedback control modeling and adaptive point process filtering. PLoS Comput. Biol. 12:e1004730. doi: 10.1371/journal.pcbi.1004730
Sharifshazileh, M., Burelo, K., Sarnthein, J., and Indiveri, G. (2021). An electronic neuromorphic system for real-time detection of high frequency oscillations (HFO) in intracranial EEG. Nat. Commun. 12, 1–10. doi: 10.1038/s41467-021-23342-2
Sheu, Y.-H. (2020). Illuminating the black box: interpreting deep neural network models for psychiatric research. Front. Psych. 11:551299. doi: 10.3389/fpsyt.2020.551299
Shim, G., Breinyn, I. B., Martínez-Calvo, A., Rao, S., and Cohen, D. J. (2024). Bioelectric stimulation controls tissue shape and size. Nat. Commun. 15, 1–12. doi: 10.1038/s41467-024-47079-w
Shrestha, S. B., and Song, Q. (2014). Adaptive learning rate of SpikeProp based on weight convergence analysis. Neural Netw. 63, 185–198. doi: 10.1016/j.neunet.2014.12.001
Simoncelli, E. P., and Heeger, D. J. (1998). A model of neuronal responses in visual area MT. Vis. Res. 38, 743–761. doi: 10.1016/s0042-6989(97)00183-1
Singh, M. F., Braver, T. S., Cole, M., and Ching, S. N. (2025). Precision data-driven modeling of cortical dynamics reveals person-specific mechanisms underpinning brain electrophysiology. Proc. Natl. Acad. Sci. 122:e2409577121. doi: 10.1073/pnas.2409577121
Song, H., Hsieh, T. H., Yeon, S. H., Shu, T., Nawrot, M., Landis, C. F., et al. (2024). Continuous neural control of a bionic limb restores biomimetic gait after amputation. Nat. Med. 30, 2010–2019. doi: 10.1038/s41591-024-02994-9
Stanslaski, S., Afshar, P., Cong, P., Giftakis, J., Stypulkowski, P., Carlson, D., et al. (2012). Design and validation of a fully implantable, chronic, closed-loop neuromodulation device with concurrent sensing and stimulation. IEEE Trans. Neural Syst. Rehabil. Eng. 20, 410–421. doi: 10.1109/tnsre.2012.2183617
Su, J., Song, Y., Zhu, Z., Huang, X., Fan, J., Qiao, J., et al. (2024). Cell-cell communication: new insights and clinical implications. Signal Transduct. Target. Ther. 9:196. doi: 10.1038/s41392-024-01888-z
Su, F., Yuan, P., Wang, Y., and Zhang, C. (2016). The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm. Protein Cell 7, 735–748. doi: 10.1007/s13238-016-0302-5
Subramoney, A., Bellec, G., Scherr, F., Legenstein, R., and Maass, W. (2024). Fast learning without synaptic plasticity in spiking neural networks. Sci. Rep. 14:8557. doi: 10.1038/s41598-024-55769-0
Suffczynski, P., Kalitzin, S., and Da Silva, F. H. L. (2004). Dynamics of non-convulsive epileptic phenomena modeled by a bistable neuronal network. Neuroscience 126, 467–484. doi: 10.1016/j.neuroscience.2004.03.014
Sung, S. H., Kim, T. J., Shin, H., Im, T. H., and Lee, K. J. (2022). Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse. Nat. Commun. 13, 1–3. doi: 10.1038/s41467-022-30432-2
Supèr, H., and Roelfsema, P. R. (2004). Chronic multiunit recordings in behaving animals: advantages and limitations. Prog. Brain Res. 147, 263–282. doi: 10.1016/s0079-6123(04)47020-4
Tait, A. N., Ferreira de Lima, T., Nahmias, M. A., Miller, H. B., Peng, H. T., Shastri, B. J., et al. (2019). Silicon photonic modulator neuron. Phys. Rev. Appl. 11, 1–3. doi: 10.1103/physrevapplied.11.064043
Takaghaj, S. M., and Sampson, J. (2024). Rouser: robust SNN training using adaptive threshold learning. arXiv. doi: 10.48550/arxiv.2407.19566
Tang, G., Vadivel, K., Xu, Y., Bilgic, R., Shidqi, K., Detterer, P., et al. (2023). SENECA: building a fully digital neuromorphic processor, design trade-offs and challenges. Front. Neurosci. 17:1187252. doi: 10.3389/fnins.2023.1187252
Tanikawa, S., Ebisu, Y., Sedlačík, T., Semba, S., Nonoyama, T., Kurokawa, T., et al. (2023). Engineering of an electrically charged hydrogel implanted into a traumatic brain injury model for stepwise neuronal tissue reconstruction. Sci. Rep. 13:2233. doi: 10.1038/s41598-023-28870-z
Tomko, M., Benuskova, L., and Jedlicka, P. (2021). A new reduced-morphology model for CA1 pyramidal cells and its validation and comparison with other models using HippoUnit. Sci. Rep. 11:7615. doi: 10.1038/s41598-021-87002-7
Topalovic, U., Barclay, S., Ling, C., Alzuhair, A., Yu, W., Hokhikyan, V., et al. (2023). A wearable platform for closed-loop stimulation and recording of single-neuron and local field potential activity in freely moving humans. Nat. Neurosci. 26, 517–527. doi: 10.1038/s41593-023-01260-4
Vakilipour, P., and Fekrvand, S. (2024). Brain-to-brain interface technology: a brief history, current state, and future goals. Int. J. Dev. Neurosci. 84, 351–367. doi: 10.1002/jdn.10334
Valle, G., Katic Secerovic, N., Eggemann, D., Gorskii, O., Pavlova, N., Petrini, F. M., et al. (2024). Biomimetic computer-to-brain communication enhancing naturalistic touch sensations via peripheral nerve stimulation. Nat. Commun. 15, 3–8. doi: 10.1038/s41467-024-45190-6
Wan, W., Kubendran, R., Schaefer, C., Eryilmaz, S. B., Zhang, W., Wu, D., et al. (2022). A compute-in-memory chip based on resistive random-access memory. Nature 608, 504–512. doi: 10.1038/s41586-022-04992-8
Wang, S., Wang, B., Shang, D., Zhang, K., Yan, X., and Zhang, X. (2022). Ion channel dysfunction in astrocytes in neurodegenerative diseases. Front. Physiol. 13:814285. doi: 10.3389/fphys.2022.814285
Wang, K., Wei, A., Fu, Y., Wang, T., Gao, X., Fu, B., et al. (2022). State-dependent modulation of thalamocortical oscillations by gamma light flicker with different frequencies, intensities, and duty cycles. Front. Neuroinform. 16:968907. doi: 10.3389/fninf.2022.968907
Wang, C., Yan, H., Huang, W., Sheng, W., Wang, Y., Fan, Y. S., et al. (2023). Neural encoding with unsupervised spiking convolutional neural network. Commun. Biol. 6:880. doi: 10.1038/s42003-023-05257-4
Wang, B., et al. (2023). "A new ANN-SNN conversion method with high accuracy, low latency and good robustness," in Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI), pp. 3067–3075.
Ward, M., and Rhodes, O. (2022). Beyond LIF neurons on neuromorphic hardware. Front. Neurosci. 16:881598. doi: 10.3389/fnins.2022.881598
Watanabe, M. (2023). AI consciousness and neuroscientifically plausible “seamless” mind-uploading. Open Access Govern. 40, 284–285. doi: 10.56367/oag-040-10981
Willsey, M. S., Nason-Tomaszewski, S. R., Ensel, S. R., Temmar, H., Mender, M. J., Costello, J. T., et al. (2022). Real-time brain-machine interface in non-human primates achieves high-velocity prosthetic finger movements using a shallow feedforward neural network decoder. Nat. Commun. 13, 8–12. doi: 10.1038/s41467-022-34452-w
Wilson, B. S., and Dorman, M. F. (2008). Cochlear implants: a remarkable past and a brilliant future. Hear. Res. 242, 3–21. doi: 10.1016/j.heares.2008.06.005
Wuillemin, N., Ballmer-Weber, B., Schlapbach, C., Jörg, L., and Yerly, D. (2022). The activation pattern of drug-reacting T cells has an impact on the clinical picture of hypersensitivity reactions. Front. Allergy 3:804605. doi: 10.3389/falgy.2022.804605
Wunderlich, T. C., and Pehle, C. (2021). Event-based backpropagation can compute exact gradients for spiking neural networks. Sci. Rep. 11:12829. doi: 10.1038/s41598-021-91786-z
Xiang, S., et al. (2023). Neuromorphic speech recognition with photonic convolutional spiking neural networks. IEEE J. Sel. Top. Quantum Electron. 29, 1–7. doi: 10.1109/jstqe.2023.3240248
Yamazaki, K., Vo-Ho, V. K., Bulsara, D., and Le, N. (2022). Spiking neural networks and their applications: a review. Brain Sci. 12:863. doi: 10.3390/brainsci12070863
Yang, L., Wang, Z., Wang, G., Liang, L., Liu, M., and Wang, J. (2024). Brain-inspired modular echo state network for EEG-based emotion recognition. Front. Neurosci. 18:1305284. doi: 10.3389/fnins.2024.1305284
Yao, M., Richter, O., Zhao, G., Qiao, N., Xing, Y., Wang, D., et al. (2024). Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip. Nat. Commun. 15, 1–3. doi: 10.1038/s41467-024-47811-6
Yik, J., et al. (2025). The neurobench framework for benchmarking neuromorphic computing algorithms and systems. Nat. Commun. 16. doi: 10.1038/s41467-025-56739-4
Yin, Y., Wang, X., and Feng, T. (2024). Noninvasive brain stimulation for improving cognitive deficits and clinical symptoms in attention-deficit/hyperactivity disorder: a systematic review and Meta-analysis. Brain Sci. 14:1237. doi: 10.3390/brainsci14121237
Yousif, N., and Liu, X. (2007). Modeling the current distribution across the depth electrode–brain interface in deep brain stimulation. Expert Rev. Med. Devices 4, 623–631. doi: 10.1586/17434440.4.5.623
Yu, Z., Liu, J. K., Jia, S., Zhang, Y., Zheng, Y., Tian, Y., et al. (2020). Toward the next generation of retinal neuroprosthesis: visual computation with spikes. Engineering 6, 449–461. doi: 10.1016/j.eng.2020.02.004
Zenke, F., and Ganguli, S. (2018). SuperSpike: supervised learning in multilayer spiking neural networks. Neural Comput. 30, 1514–1541. doi: 10.1162/neco_a_01086
Zhang, W., Gao, B., Tang, J., Yao, P., Yu, S., Chang, M. F., et al. (2020). Neuro-inspired computing chips. Nat. Electron. 3, 371–382. doi: 10.1038/s41928-020-0435-7
Zhang, G., Li, B., Wu, J., Wang, R., Lan, Y., Sun, L., et al. (2019). A low-cost and high-speed hardware implementation of spiking neural network. Neurocomputing 382, 106–115. doi: 10.1016/j.neucom.2019.11.045
Zhang, A., Mandeville, E. T., Xu, L., Stary, C. M., Lo, E. H., and Lieber, C. M. (2023). Ultraflexible endovascular probes for brain recording through micrometer-scale vasculature. Science 381, 306–312. doi: 10.1126/science.adh3916
Zhang, Z., Savolainen, O. W., and Constandinou, T. G. (2022). Algorithm and hardware considerations for real-time neural signal on-implant processing. J. Neural Eng. 19:016029. doi: 10.1088/1741-2552/ac5268
Zhang, L., Yang, J., Shi, C., Lin, Y., He, W., Zhou, X., et al. (2021). A cost-efficient high-speed VLSI architecture for spiking convolutional neural network inference using time-step binary spike maps. Sensors 21:6006. doi: 10.3390/s21186006
Zhang, D., et al. (2017). Cascade and parallel convolutional recurrent neural networks on EEG-based intention recognition for brain computer interface. arXiv. doi: 10.48550/arxiv.1708.06578
Zhang, L., et al. (2024). Skin-inspired, sensory robots for electronic implants. Nat. Commun. 15, 1–5. doi: 10.1038/s41467-024-48903-z
Zhao, Y., Liu, Y., Kang, S., Sun, D., Liu, Y., Wang, X., et al. (2024). Peripheral nerve injury repair by electrical stimulation combined with graphene-based scaffolds. Front. Bioeng. Biotechnol. 12:1345163. doi: 10.3389/fbioe.2024.1345163
Zhou, B., Sun, P.-S. V., and Basu, A. (2023). ANN vs SNN: a case study for neural decoding in implantable brain-machine interfaces. arXiv. doi: 10.48550/arxiv.2312.15889
Zhu, Y., et al. (2021). Recent advances in emerging neuromorphic computing and perception devices. J. Phys. D Appl. Phys. 55:053002. doi: 10.1088/1361-6463/ac2868
Zlokapa, A., et al. (2022). Biological error correction codes generate fault-tolerant neural networks. arXiv. doi: 10.48550/arxiv.2202.12887
Zoltowski, D. M., and Pillow, J. W. (2018). Scaling the Poisson GLM to massive neural datasets through polynomial approximations. Adv. Neural Inf. Process Syst. 31, 3517–3527.
Zotey, V., Andhale, A., Shegekar, T., and Juganavar, A. (2023). Adaptive neuroplasticity in brain injury recovery: strategies and insights. Cureus 15:e45873. doi: 10.7759/cureus.45873
Keywords: neuromorphic computing, brain implants, spiking neural networks (SNNs), mixed-signal design, neurocomputational models, brain-computer interfaces (BCIs), biohybrid interfaces, data compression
Citation: Pawlak WA and Howard N (2025) Neuromorphic algorithms for brain implants: a review. Front. Neurosci. 19:1570104. doi: 10.3389/fnins.2025.1570104
Edited by:
Jim Harkin, Ulster University, United Kingdom
Copyright © 2025 Pawlak and Howard. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Wiktoria Agata Pawlak, wpawlak@ni2o.com