
REVIEW article

Front. Big Data, 12 April 2022
Sec. Big Data and AI in High Energy Physics
This article is part of the Research Topic "Efficient AI in Particle Physics and Astrophysics."

Applications and Techniques for Fast Machine Learning in Science

Allison McCarn Deiana1*, Nhan Tran2,3*, Joshua Agar4, Michaela Blott5, Giuseppe Di Guglielmo6, Javier Duarte7, Philip Harris8, Scott Hauck9, Mia Liu10, Mark S. Neubauer11, Jennifer Ngadiuba2, Seda Ogrenci-Memik3, Maurizio Pierini12, Thea Aarrestad12, Steffen Bähr13, Jürgen Becker13, Anne-Sophie Berthold14, Richard J. Bonventre15, Tomás E. Müller Bravo16, Markus Diefenthaler17, Zhen Dong18, Nick Fritzsche14, Amir Gholami18, Ekaterina Govorkova12, Dongning Guo3, Kyle J. Hazelwood2, Christian Herwig2, Babar Khan19, Sehoon Kim18, Thomas Klijnsma2, Yaling Liu20, Kin Ho Lo21, Tri Nguyen8, Gianantonio Pezzullo22, Seyedramin Rasoulinezhad23, Ryan A. Rivera2, Kate Scholberg24, Justin Selig25, Sougata Sen26, Dmitri Strukov27, William Tang28, Savannah Thais28, Kai Lukas Unger13, Ricardo Vilalta29, Belina von Krosigk13,30, Shen Wang21, Thomas K. Warburton31
  • 1Department of Physics, Southern Methodist University, Dallas, TX, United States
  • 2Fermi National Accelerator Laboratory, Batavia, IL, United States
  • 3Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States
  • 4Department of Materials Science and Engineering, Lehigh University, Bethlehem, PA, United States
  • 5Xilinx Research, Dublin, Ireland
  • 6Department of Computer Science, Columbia University, New York, NY, United States
  • 7Department of Physics, University of California, San Diego, San Diego, CA, United States
  • 8Massachusetts Institute of Technology, Cambridge, MA, United States
  • 9Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
  • 10Department of Physics and Astronomy, Purdue University, West Lafayette, IN, United States
  • 11Department of Physics, University of Illinois Urbana-Champaign, Champaign, IL, United States
  • 12European Organization for Nuclear Research (CERN), Meyrin, Switzerland
  • 13Karlsruhe Institute of Technology, Karlsruhe, Germany
  • 14Institute of Nuclear and Particle Physics, Technische Universität Dresden, Dresden, Germany
  • 15Lawrence Berkeley National Laboratory, Berkeley, CA, United States
  • 16Department of Physics and Astronomy, University of Southampton, Southampton, United Kingdom
  • 17Thomas Jefferson National Accelerator Facility, Newport News, VA, United States
  • 18Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
  • 19Department of Computer Science, Technical University Darmstadt, Darmstadt, Germany
  • 20Department of Bioengineering, Lehigh University, Bethlehem, PA, United States
  • 21Department of Physics, University of Florida, Gainesville, FL, United States
  • 22Department of Physics, Yale University, New Haven, CT, United States
  • 23Department of Engineering and IT, University of Sydney, Camperdown, NSW, Australia
  • 24Department of Physics, Duke University, Durham, NC, United States
  • 25Cerebras Systems, Sunnyvale, CA, United States
  • 26Birla Institute of Technology and Science, Pilani, India
  • 27Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
  • 28Department of Physics, Princeton University, Princeton, NJ, United States
  • 29Department of Computer Science, University of Houston, Houston, TX, United States
  • 30Department of Physics, Universität Hamburg, Hamburg, Germany
  • 31Department of Physics and Astronomy, Iowa State University, Ames, IA, United States

In this community review report, we discuss applications and techniques for fast machine learning (ML) in science—the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples and inspiration for scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.

Overview

Machine learning (ML) is making a huge impact on our society and daily lives through advancements in computer vision, natural language processing, and autonomous vehicles, among others. ML is also powering scientific advances that can lead to future paradigm shifts in a broad range of domains, including particle physics, plasma physics, astronomy, neuroscience, chemistry, materials science, and biomedical engineering. Scientific discoveries come from groundbreaking ideas and the capability to validate those ideas by testing nature at new scales: finer and more precise temporal and spatial resolution. This is leading to an explosion of data that must be interpreted, and ML is proving to be a powerful approach. The more efficiently we can test our hypotheses, the faster we can achieve discovery. To fully unleash the power of ML and accelerate discoveries, it is necessary to embed it into our scientific process, into our instruments and detectors.

It is in this spirit that the Fast Machine Learning for Science community1 has been built. Two workshops have also been organized through this growing community and are the source for this report. The community brings together an extremely wide-ranging group of domain experts who would rarely interact as a whole. One of the underlying benefits of ML is the portability and general applicability of the techniques that can enable experts from seemingly unrelated domains to find a common language. Scientists and engineers from particle physicists to networking experts and biomedical engineers are represented and can interact with experts in fundamental ML techniques and compute systems architects.

This report aims to summarize the progress in the community to understand how our scientific challenges overlap and where there are potential commonalities in data representations, ML approaches, and technology, including hardware and software platforms. Therefore, the content of the report includes the following: descriptions of a number of different scientific domains including existing work and applications for embedded ML; potential overlaps across scientific domains in data representation or system constraints; and an overview of state-of-the-art techniques for efficient machine learning and compute platforms, both cutting-edge and speculative technologies.

Necessarily, such a broad scope of topics cannot be comprehensive. For the scientific domains, we note that the contributions are examples of how ML methods are currently being or planned to be deployed. We hope that giving a glimpse into specific applications will inspire readers to find more novel use-cases and potential overlaps. The summaries of state-of-the-art techniques we provide relate to rapidly developing fields and, as such, may become out of date relatively quickly. The goal is to give non-experts an overview and taxonomy of the different techniques and a starting point for further investigation. To be succinct, we rely heavily on providing references to studies and other overviews while describing most modern methods.

We hope the reader finds this report both instructive and motivational. Feedback and input to this report, and to the larger community, are welcome and appreciated.

1. Introduction

In pursuit of scientific advancement across many domains, experiments are becoming exceedingly sophisticated in order to probe physical systems at increasingly smaller spatial resolutions and shorter timescales. These order-of-magnitude advancements have led to explosions in both data volume and richness, requiring domain scientists to develop novel methods to handle their growing data processing needs.

Simultaneously, machine learning (ML), or the use of algorithms that can learn directly from data, is leading to rapid advancements across many scientific domains (Carleo et al., 2019). Recent advancements have demonstrated that deep learning (DL) architectures based on structured deep neural networks are versatile and capable of solving a broad range of complex problems. The proliferation of large datasets like ImageNet (Russakovsky et al., 2015), together with advances in computing and DL software, has led to the exploration of many different DL approaches, each with its own advantages.

In this review paper, we will focus on the fusion of ML and experimental design to solve critical scientific problems by accelerating and improving data processing and real-time decision-making. We will discuss the myriad of scientific problems that require fast ML, and we will outline unifying themes across these domains that can lead to general solutions. Furthermore, we will review the current technology needed to make ML algorithms run fast, and we will present critical technological problems that, if solved, could lead to major scientific advancements. An important requirement for such advancements in science is the need for openness. It is vital for experts from domains that do not often interact to come together to develop transferable solutions and work together to develop open-source solutions.

Many of the advancements within ML over the past few years have originated from the use of heterogeneous computing hardware. In particular, the use of graphics processing units (GPUs) has enabled the development of large DL algorithms (Raina et al., 2009; Cireşan et al., 2010; Krizhevsky et al., 2012). The ability to train large artificial intelligence (AI) algorithms on large datasets has enabled algorithms capable of performing sophisticated tasks. In parallel with these developments, new types of DL algorithms have emerged that aim to reduce the number of operations so as to enable fast and efficient AI algorithms (Box 1).

Box 1. Fast machine learning in science.

Within this review paper, we refer to the concept of Fast Machine Learning in Science as the integration of ML into the experimental data processing infrastructure to enable and accelerate scientific discovery. Fusing powerful ML techniques with experimental design decreases the “time to science” and can range from embedding real-time feature extraction to be as close as possible to the sensor all the way to large-scale ML acceleration across distributed grid computing datacenters. The overarching theme is to lower the barrier to advanced ML techniques and implementations to make large strides in experimental capabilities across many seemingly different scientific applications. Efficient solutions require collaboration between domain experts, machine learning researchers, and computer architecture designers.

This paper is a review of the second annual Fast Machine Learning conference (University, 2020) and builds on the materials presented at this conference. It brings together experts from multiple scientific domains, ranging from particle physics to materials science to health monitoring, with machine learning experts and computer systems architects. Figure 1 illustrates the spirit of the workshop series that inspired this paper and the topics covered in subsequent sections.

Figure 1. The concept behind this review paper is to find the confluence of domain-specific challenges, machine learning, and experiment and computer system architectures to accelerate science discovery.

As ML tools have become more sophisticated, much of the focus has turned to building very large algorithms that solve complicated problems, such as language translation and voice recognition. However, in the wake of these developments, a broad range of scientific applications have emerged that can benefit greatly from the rapid developments underway. Furthermore, these applications have diversified as people have come to realize how to adapt their scientific approaches to take advantage of the benefits originating from the AI revolution. This can include the capability of AI to classify events in real time, such as the identification of a particle collision or a gravitational-wave merger. It can also include systems control, such as feedback control in plasmas and particle accelerators. The latency, bandwidth, and throughput restrictions, and the reasons for such restrictions, differ within each system. However, in all cases, accelerating ML is a driver in the design goal.

The design of low latency algorithms differs from other AI implementations in that we must tailor specific processing hardware to the task at hand to increase the overall algorithm performance. In particular, certain processor cores have been configured for optimized sparse matrix multiplications. Others have been optimized to maximize the total amount of compute. Processor design, and the design of algorithms around processors, often referred to as hardware ML co-design, is the focus of the work in this review. For example, in some cases, ultra-low latency inference times are needed to perform scientific measurements. One must efficiently design the algorithm to optimally utilize the hardware constraints available while preserving the algorithm performance within desired experimental requirements. This is the essence of hardware ML co-design.

The contents of this review are laid out as follows. In Section 2, we explore a broad range of scientific problems where Fast ML can act as a disruptive technology, significantly changing how we process data. Applications from seemingly different domains are examined. In Section 3, we describe data representations and experimental platform choices that are common to many types of experiments, and we discuss how Fast ML solutions can be generalized to low-latency, highly resource-efficient, and domain-specific deep learning inference for many scientific applications. Finally, in Section 4, we address the optimized hardware-ML co-design, from algorithm design to system architecture, that is required to achieve this: we provide an overview of state-of-the-art techniques to train neural networks optimized for both performance and speed, survey various compute architectures to meet the needs of experimental design, and outline software solutions that optimize and enable hardware deployment.

The goal of this paper is to bring together scientific opportunities, common solutions, and state-of-the-art technology into one single narrative. We hope this can contribute to accelerating the deployment of potentially transformative ML solutions to a broad range of scientific fields going forward.

2. Exemplars of Domain Applications

As scientific ecosystems grow rapidly in their speed and scale, new paradigms for data processing and reduction need to be integrated into system-level design. In this section, we explore requirements for accelerated and sophisticated data processing. Implementations of fast machine learning can appear greatly varied across domains and architectures, yet they can share similar underlying data representations and needs for integrating machine learning. We enumerate here a broad sampling of scientific domains across seemingly unrelated tasks, including their existing techniques and future needs. This leads into the next section, where we discuss overlaps and common tasks.

We note here that this section emphasizes challenges addressed with deep learning techniques, which are being proposed to handle increasingly complex datasets in scientific applications, while sometimes referring to other classic ML algorithms. However, in all of these use cases there is, understandably, a long history of domain-specific algorithms and other classic, "shallow" ML algorithms that have been developed. For example, see the discussion of classic ML methods in Albertsson et al. (2018) and even the use of boosted decision trees in real-time electronics systems (Gligorov and Williams, 2013). The performance and robustness of deep learning algorithms should be compared against and understood with respect to these previous methods, and similarly for simpler vs. more complex deep learning algorithms. We consider a full survey of classic ML, deep learning, and domain algorithms for given applications to be beyond the scope of this paper.

In this section, we first give a detailed description of examples of Fast ML techniques being deployed at experiments at the Large Hadron Collider. Rapid development has occurred for these experiments recently, providing an exemplar of how broad advancements can be made across various aspects of a specific domain. The following subsections are briefer but lay out key challenges and areas of existing and potential applications of Fast ML across a number of other scientific domains.

2.1. Large Hadron Collider

The Large Hadron Collider (LHC) at CERN is the world's largest and highest-energy particle accelerator, where collisions between bunches of protons occur every 25 ns. To study the products of these collisions, several detectors are located along the ring at interaction points. The aim of these detectors is to measure the properties of the Higgs boson (Aad et al., 2012; Chatrchyan et al., 2012) with high precision and to search for new physics phenomena beyond the standard model of particle physics. Due to the extremely high frequency of 40 MHz at which proton bunches collide, the high multiplicity of secondary particles, and the large number of sensors, the detectors have to process and store data at enormous rates. For the two multipurpose experiments, CMS and ATLAS (Aad, 2008), which comprise tens of millions of readout channels, these rates are of the order of 100 Tb/s. Processing and storing this data presents severe challenges that are among the most critical for the execution of the LHC physics program.

The approach implemented by the detectors for data processing consists of an online processing stage, where the event is selected from a buffer and analyzed in real time, and an offline processing stage, in which data have been written to disk and are more thoroughly analyzed with sophisticated algorithms. The online processing system, called the trigger, reduces the data rate to a manageable level of 10 Gb/s to be recorded for offline processing. The trigger is typically divided into multiple tiers. Due to the limited size of the on-detector buffers, the first tier (Level-1 or L1) utilizes FPGAs and ASICs capable of executing the filtering process with a maximum latency of O(1) μs. At the second stage, the high-level trigger (HLT), data are processed on a CPU-based computing farm located at the experimental site with a latency of up to 100 ms. Finally, the complete offline event processing is performed on a globally distributed CPU-based computing grid.

Maintaining the capabilities of this system will become even more challenging in the near future. In 2027, the LHC will be upgraded to the so-called High-Luminosity LHC (HL-LHC) where each collision will produce 5–7 times more particles, ultimately resulting in a total amount of accumulated data that will be one order of magnitude higher than achieved with the present accelerator. At the same time, the particle detectors will be made larger, more granular, and capable of processing data at ever-increasing rates. Therefore, the physics that can be extracted from the experiments will be limited by the accuracy of algorithms and computational resources.

Machine learning technologies offer promising solutions and enhanced capabilities in both of these areas, thanks to their capacity for extracting the most relevant information from high-dimensional data and to their highly parallelizable implementation on suitable hardware. In addition, there are even some early investigations exploring potential applications of machine learning using quantum computing (Wu et al., 2021). It is expected that a new generation of algorithms, if deployed at all stages of data-processing systems at the LHC experiments, will play a crucial part in maintaining, and hopefully improving, the physics performance. In the following sections, a few examples of the application of machine learning models to physics tasks at the LHC are reviewed, together with novel methods for their efficient deployment in both the real-time and offline data processing stages.

2.1.1. Event Reconstruction

The reconstruction of proton-proton collision events in the LHC detectors involves challenging pattern recognition tasks, given the large number [O(1,000)] of secondary particles produced and the high detector granularity. Specialized detector sub-systems and algorithms are used to reconstruct the different types and properties of particles produced in collisions. For example, the trajectories of charged particles are reconstructed from space point measurements in the inner silicon detectors, and the showers arising from particles traversing the calorimeters are reconstructed from clusters of activated sensors.

Traditional algorithms are highly tuned for physics performance in the current LHC collision environment, but are inherently sequential and scale poorly to the expected HL-LHC conditions. It is thus necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance will be sufficient. Deep learning solutions are currently being explored for pattern recognition tasks, as a significant speedup can be achieved when harnessing heterogeneous computing and parallelizable and efficient ML that exploits AI-dedicated hardware. In particular, modern architectures such as graph neural networks (GNNs) are being explored for the reconstruction of particle trajectories, showers in the calorimeter as well as of the final individual particles in the event. Much of the following work has been conducted using the TrackML dataset (Calafiura et al., 2018), which simulates a generalized detector under HL-LHC-like pileup conditions. Quantifying the performance of these GNNs in actual experimental data is an ongoing point of study.

For reconstructing showers in calorimeters, GNNs have been found to predict the properties of the original incident particle with high accuracy starting from individual energy deposits. The work in Gray et al. (2020) proposes a graph formulation of pooling to dynamically learn the most important relationships between data via an intermediate clustering, and therefore removing the need for a predetermined graph structure. When applied to the CMS electromagnetic calorimeter, with single detector hits as inputs to predict the energy of the original incident particle, a 10% improvement is found over the traditional boosted decision tree (BDT) based approach.

GNNs have been explored for a similar calorimeter reconstruction task for the high-granularity calorimeters that will replace the current design for HL-LHC. The task will become even more challenging as such detectors will feature irregular sensor structure and shape (e.g., hexagonal sensor cells for CMS; CMS Collaboration, 2017), high occupancy, and an unprecedented number of sensors. For this application, architectures such as EDGECONV (Wang et al., 2018b) and GRAVNET/GARNET (Qasim et al., 2019) have shown promising performance in the determination of the properties of single showers, yielding excellent energy resolution and high noise rejection (Ju et al., 2020). While these preliminary studies were focused on scenarios with low particle multiplicities, the scalability of the clustering performance to more realistic collision scenarios is still a subject of active development.

GNNs have also been extensively studied for charged particle tracking (the task of identifying and reconstructing the trajectories of individual particles in the detector) (Farrell et al., 2018; Tsaris et al., 2018; Duarte and Vlimant, 2020; Ju et al., 2020). The first approaches to this problem typically utilized edge-classification GNNs in a three-step process: graphs are constructed by algorithmically building edges between tracker hits in a point cloud, the graphs are processed through a GNN to predict edge weights (true edges that are part of true particle trajectories should receive high weights and false edges low weights), and finally, the selected edges are grouped together to generate high-weight sub-graphs which form full track candidates, as shown in Figure 2.

Figure 2. High-level overview of the stages in a GNN-based tracking pipeline. Only a subset of the typical edge weights are shown for illustration purposes. (A) Graph construction, (B) edge classification, and (C) track construction.
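
As a concrete, deliberately simplified illustration of this three-stage pipeline, the following Python sketch builds edges between hits on adjacent detector layers, scores each edge with a small neural network standing in for the trained GNN, and retains high-weight edges as input to track building. The hit features, selection windows, network size, and threshold are illustrative assumptions rather than the configuration used in the cited works.

```python
import numpy as np
import tensorflow as tf

# Toy hit collection: (layer, r, phi, z) per hit; a real tracker has O(10^5) hits per event.
rng = np.random.default_rng(0)
hits = np.column_stack([
    rng.integers(0, 4, 200),          # detector layer index
    rng.uniform(30, 110, 200),        # radius [mm]
    rng.uniform(-np.pi, np.pi, 200),  # azimuthal angle
    rng.uniform(-200, 200, 200),      # longitudinal position [mm]
])

# 1) Graph construction: connect hits on adjacent layers within a window in phi and z.
def build_edges(hits, dphi=0.2, dz=80.0):
    edges = []
    for i, hi in enumerate(hits):
        for j, hj in enumerate(hits):
            if hj[0] == hi[0] + 1 and abs(hj[2] - hi[2]) < dphi and abs(hj[3] - hi[3]) < dz:
                edges.append((i, j))
    return np.array(edges)

edges = build_edges(hits)

# 2) Edge classification: a small network scores each edge from the two hit feature vectors.
edge_features = np.hstack([hits[edges[:, 0], 1:], hits[edges[:, 1], 1:]])
edge_scorer = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# In practice the scorer is trained on labeled true/false edges; here the weights are random.
scores = edge_scorer.predict(edge_features, verbose=0).ravel()

# 3) Track building: keep high-weight edges and group connected hits into track candidates.
selected = edges[scores > 0.5]
print(f"{len(edges)} candidate edges, {len(selected)} selected for track building")
```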

There have been several studies building upon and optimizing this initial framework. The ExaTrkX collaboration has demonstrated performance improvements by incorporating a recurrent GNN structure (Ju et al., 2020) and re-embedding graphs prior to training the GNNs (Choma et al., 2020). Other work has shown that using an Interaction Network architecture (Battaglia et al., 2016) can substantially reduce the number of learnable parameters in the GNN (DeZoort et al., 2021); the authors also provide comprehensive comparisons between different graph construction and track building algorithms. Recent work has also explored alternate approaches that combine graph building, GNN inference, and track construction into a single algorithm that is trainable end-to-end; in particular, instance segmentation architectures have generated promising results (Thais and DeZoort, 2021).

Finally, a novel approach based on GNNs (Pata et al., 2021) has been proposed as an alternative solution to the so-called particle-flow algorithm that is used by LHC experiments to optimally reconstruct each individual particle produced in a collision by combining information from the calorimeters and the tracking detectors (Sirunyan et al., 2017). The new GNN algorithm is found to offer performance comparable to the existing reconstruction algorithm for charged and neutral hadrons. At the same time, the inference time is found to scale approximately linearly with the particle multiplicity, which is promising for its ability to keep computing costs within budget for the HL-LHC. Several aspects of this approach remain under study: further improvements such as an event-based loss (e.g., the object condensation approach); a complete assessment of the physics performance, including the reconstruction of rare particles and other corners of the phase space; and how to optimize and coherently interface this algorithm with the ML-based approaches proposed for tasks downstream and upstream in the particle-level reconstruction.

2.1.2. Event Simulation

The extraction of results from LHC data relies on a detailed and precise simulation of the physics of proton-proton collisions and of the response of the detector. In fact, the collected data are typically compared to a reference model, representing the current knowledge, in order to either confirm or disprove it. Numerical models, based on Monte Carlo (MC) methods, are used to simulate the interaction between elementary particles and matter, while the Geant4 toolkit is employed to simulate the detectors. These simulations are generally very CPU intensive and require roughly half of the experiment's computing resources, with this fraction expected to increase significantly for the HL-LHC.

Novel computational methods based on ML are being explored so as to perform precise modeling from particle interactions to detector readouts and response while maintaining feasible computing budgets for HL-LHC. In particular, numerous works have focused on the usage of generative adversarial networks or other state-of-the-art generative models to replace computationally intensive fragments of MC simulation, such as modeling of electromagnetic showers (de Oliveira et al., 2017; Paganini et al., 2018a,b), reconstruction of jet images (Musella and Pandolfi, 2018) or matrix element calculations (Bendavid, 2017). In addition, the usage of ML generative models on end-to-end analysis-specific fast simulations have also been investigated in the context of Drell-Yan (Hashemi et al., 2019), dijet (Di Sipio et al., 2019), and W+jets (Chen et al., 2020) production. These case-by-case proposals serve as proof-of-principle examples for complementary data augmentation strategy for LHC experiments.
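
The following sketch illustrates the generative-adversarial setup underlying such fast-simulation studies: a generator maps random noise to a calorimeter "shower image," while a discriminator is trained to separate generated showers from Geant4-simulated ones. The image size, network depth, and the omission of energy conditioning are placeholders for illustration and do not correspond to any of the published models cited above.

```python
import tensorflow as tf

latent_dim = 64
image_shape = (12, 12, 1)  # assumed coarse calorimeter "shower image"; real detectors differ

# Generator: maps random noise (optionally conditioned on the particle energy) to a shower image.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(6 * 6 * 32, activation="relu"),
    tf.keras.layers.Reshape((6, 6, 32)),
    tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="relu"),  # cell energies are non-negative
])

# Discriminator: distinguishes Geant4-simulated showers from generated ones.
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=image_shape),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The two networks are trained adversarially; once trained, sampling the generator
# replaces the CPU-intensive shower simulation step.
fake = generator(tf.random.normal((8, latent_dim)))
print(fake.shape, discriminator(fake).shape)
```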

2.1.3. Heterogeneous Computing

State-of-the-art deep learning models are being explored for the compute-intensive reconstruction of each collision event at the LHC. However, their efficient deployment within the experiments' computing paradigms is still a challenge, despite the potential speed-up when the inference is executed on suitable AI-dedicated hardware. In order to gain from a parallelizable ML-based translation of traditional and mostly sequential algorithms, a heterogeneous computing architecture needs to be implemented in the experiment infrastructure. For this reason, comprehensive exploration of the use of CPU+GPU (Krupa et al., 2020) and CPU+FPGA (Duarte et al., 2019; Rankin et al., 2020) heterogeneous architectures was made to achieve the desired acceleration of deep learning inference within the data processing workflow of LHC experiments. These works demonstrated that the acceleration of machine learning inference “as a service” represents a heterogeneous computing solution for LHC experiments that potentially requires minimal modification to the current computing model.

In this approach, the ML algorithms are transferred to a co-processor on an independent (local or remote) server by reconfiguring the CPU node to communicate with it through asynchronous and non-blocking inference requests. With the inference task offloaded on demand to the server, the CPU can be dedicated to performing other necessary tasks within the event. As one server can serve many CPUs, this approach has the advantage of increasing the hardware cost-effectiveness to achieve the same throughput when comparing it to a direct-connection paradigm. It also facilitates the integration and scalability of different types of co-processor devices, where the best one is chosen for each task.

Finally, existing open-source frameworks that have been optimized for fast DL on several different types of hardware can be exploited for a quick adaptation to LHC computing. In particular, one could use the Nvidia Triton Inference Server within a custom framework, so-called Services for Optimized Network Inference on Co-processors (SONIC), to enable remote gRPC calls to either GPUs or FPGAs within the experimental software, which then only has to handle the input and output conversion between event data format and inference server format. The integration of this approach within the CMS reconstruction software has been shown to lead to a significant overall reduction in the computing demands both at the HLT and offline.
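
A minimal sketch of such an asynchronous, non-blocking inference request is shown below, assuming the NVIDIA Triton Python client (tritonclient) and hypothetical model and tensor names; in the SONIC framework the equivalent calls are issued from the experiments' C++ software.

```python
import time
import numpy as np
import tritonclient.grpc as grpcclient

# Hypothetical server address, model name, and tensor names; a Triton server must be running.
client = grpcclient.InferenceServerClient(url="localhost:8001")

batch = np.random.rand(1, 100, 4).astype(np.float32)  # placeholder event data
inputs = [grpcclient.InferInput("INPUT0", batch.shape, "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [grpcclient.InferRequestedOutput("OUTPUT0")]

def on_done(result, error):
    # Callback runs when the co-processor finishes; the CPU thread is free in the meantime.
    if error is None:
        print(result.as_numpy("OUTPUT0").shape)

# Non-blocking request: the event-processing thread continues with other work
# while the GPU/FPGA behind the server evaluates the model.
client.async_infer("particle_flow_model", inputs, callback=on_done, outputs=outputs)
time.sleep(1)  # in this toy example, give the callback time to fire before exiting
```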

2.1.4. Real-Time Analysis at 40 MHz

Bringing deep learning algorithms to the Level-1 hardware trigger is an extremely challenging task due to the strict latency requirements and the resource constraints imposed by the system. Depending on which part of the system an algorithm is designed to run on, a latency down to O(10) ns might be required. With O(100) processors running large-capacity FPGAs and processing thousands of algorithms in parallel, dedicated FPGA implementations are needed to make ML algorithms as resource-efficient and fast as possible. To facilitate the design process and subsequent deployment of highly parallel, highly compressed ML algorithms on FPGAs, dedicated open-source libraries have been developed: hls4ml and Conifer. The former, hls4ml, provides conversion tools for deep neural networks, while Conifer aids the deployment of Boosted Decision Trees (BDTs) on FPGAs. Both libraries, as well as example LHC applications, are described in the following.

The hls4ml library (Duarte et al., 2018; Coelho et al., 2020; Loncar et al., 2020; Aarrestad et al., 2021) converts pre-trained ML models into ultra low-latency FPGA or ASIC firmware with little overhead required. Integration with the Google QKeras library (Coelho, 2019) allows users to design aggressively quantized deep neural networks and train them quantization-aware (Coelho et al., 2020) down to 1 or 2 bits for weights and activations (Loncar et al., 2020). This step results in highly resource-efficient equivalents of the original model, sacrificing little to no accuracy in the process. The goal of this joint package is to provide a simple two-step approach going from a pre-trained floating point model to FPGA firmware. The hls4ml library currently provides support for several commonly used neural network layers like fully connected, convolutional, batch normalization, pooling, as well as several activation functions. These implementations are already sufficient to provide support for the most common architectures envisioned for deployment at L1.
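The two-step flow can be sketched as follows, assuming QKeras and hls4ml are installed; the layer sizes, bit widths, and FPGA part number are placeholders chosen for illustration rather than a recommended configuration.

```python
import hls4ml
from tensorflow import keras
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

# Step 1: define and train a quantization-aware model with QKeras (6-bit weights and
# activations here; the small fully connected architecture is only illustrative).
model = keras.Sequential([
    keras.Input(shape=(16,)),
    QDense(32, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    QActivation(quantized_relu(6)),
    QDense(5, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    keras.layers.Activation("softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# ... model.fit(...) performs the quantization-aware training ...

# Step 2: convert the trained model to FPGA firmware with hls4ml.
config = hls4ml.utils.config_from_keras_model(model, granularity="name")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="my_l1_model",
    part="xcvu9p-flga2104-2-e",  # placeholder FPGA part
)
hls_model.compile()  # builds a bit-accurate C simulation for validation
# hls_model.build(csim=False, synth=True)  # runs HLS synthesis (requires the vendor tools)
```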

Some first examples of machine learning models designed for the L1 trigger are based on fully connected layers, and they are proposed for tasks such as the reconstruction and calibration of final objects or lower-level inputs like trajectories, vertices, and calorimeter clusters (CERN, 2020). One example of a convolutional NN (CNN) architecture targeting the L1 trigger is a dedicated algorithm for the identification of long-lived particles (Alimena et al., 2020). Here, an attempt is made to efficiently identify showers from displaced particles in a high-granularity forward calorimeter. The algorithm is demonstrated to be highly efficient down to low energies while operating at a low trigger rate. Traditionally, cut-based selection algorithms have been used for these purposes, in order to meet the limited latency- and resource budget. However, with the advent of tools like hls4ml and QKeras, ML alternatives are being explored to improve the sensitivity to such physics processes while maintaining latency and resources in the available budget.

More recently, (variational) auto-encoders (VAEs or AEs) are being considered for the detection of "anomalous" collision events, i.e., events that are not produced by standard physics processes but that could be due instead to unexpected processes not yet explored at colliders. Such algorithms have been proposed both for the incoming LHC run starting in 2022 and for the future high-luminosity runs where more granular information will be available. The common approach uses global information about the event, including a subset of individual produced particles or final objects such as jets as well as energy sums. The algorithm trained on these inputs is then used to classify the event as anomalous if it surpasses a threshold on the degree of anomaly (typically the loss function), ultimately decided based on the available bandwidth. Deploying a typical variational autoencoder in the L1 trigger is impossible since the bottleneck layer involves Gaussian random sampling. The explored solution is therefore to deploy only the encoder part of the network and perform inference directly from the latent dimension. Another possibility is to deploy a simple auto-encoder with the same architecture and perform inference by computing the difference between output and input. However, this would require buffering a copy of the input for the duration it takes the auto-encoder to process it. For this reason, the two methods are being considered and compared in terms of accuracy over a range of new physics processes, as well as latency and resources.

Finally, another interesting aspect of the hls4ml tool is the capability for users to easily add custom layers that might serve a specific task not captured by the most common layers supported in the library. One example of this is compressed distance-weighted graph networks (Iiyama et al., 2021), where a graph network block called a GarNet layer takes as input a set of V vertices, each of which has Fin features, and returns the same set of vertices with Fout features. To keep the dimensionality of the problem at a manageable level, the input features of each vertex are encoded and aggregated at S aggregators. Message passing is only performed between vertices and a limited set of aggregators, and not between all vertices, significantly reducing the network size. In Iiyama et al. (2021), an example task of pion and electron identification and energy regression in a 3D calorimeter is studied. A total inference latency of O(100) ns is reported, satisfying the L1 requirement of O(1) μs latency. The critical resource is digital signal processing (DSP) units, of which 29% are in use by the algorithm. This can be further reduced by taking advantage of quantization-aware training with QKeras. Another example of a GNN architecture implemented on FPGA hardware using hls4ml is presented in Heintz et al. (2020). This work shows that a compressed GNN can be deployed on FPGA hardware within the latency and resources required by the L1 trigger system for the challenging task of reconstructing the trajectories of charged particles.
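
Returning to the anomaly-detection strategy above, the sketch below illustrates the encoder-only deployment: after training a VAE offline, only the encoder is evaluated in the trigger and the anomaly score is computed directly from the latent variables. The input dimensionality, network size, score definition, and threshold are illustrative assumptions, not the deployed algorithm.

```python
import numpy as np
import tensorflow as tf

# Assumed input: a flattened list of particle-level features plus energy sums per event.
n_features = 56
latent_dim = 4

inputs = tf.keras.Input(shape=(n_features,))
h = tf.keras.layers.Dense(32, activation="relu")(inputs)
z_mean = tf.keras.layers.Dense(latent_dim)(h)
z_log_var = tf.keras.layers.Dense(latent_dim)(h)
encoder = tf.keras.Model(inputs, [z_mean, z_log_var])

# After training the full VAE offline, only the encoder is deployed in the trigger:
# the anomaly score is computed directly from the latent variables, avoiding the
# Gaussian sampling of the bottleneck, which cannot be implemented within the L1 latency.
def anomaly_score(x):
    mu, log_var = encoder(x)
    # KL-divergence-like term of the latent Gaussian w.r.t. the unit prior, per event.
    return tf.reduce_sum(0.5 * (tf.square(mu) + tf.exp(log_var) - log_var - 1.0), axis=-1)

events = np.random.rand(10, n_features).astype(np.float32)
keep = anomaly_score(events) > 5.0  # threshold set by the available trigger bandwidth
print(keep.numpy())
```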

In many cases, the task to be performed is simple enough that a boosted decision tree (BDT) architecture suffices to solve the problem. As of today, BDTs are still the most commonly used ML algorithm for LHC experiments. To simplify the deployment of these, the library Conifer (Summers et al., 2020) has been developed. In Conifer, the BDT implementation targets extreme low latency inference by executing all trees, and all decisions within each tree, in parallel. BDTs and random forests can be converted from scikit-learn (Pedregosa et al., 2011), XGBoost (Chen and Guestrin, 2016), and TMVA (Therhaag and Team, 2012), with support for more BDT training libraries planned. For a large part of the field, though, the frameworks that are currently supported are the most widely used.
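
A sketch of this workflow is shown below, training a scikit-learn BDT and converting it with Conifer; the configuration helper and converter entry points follow recent Conifer releases and may differ between versions, so the library documentation should be consulted for the exact API.

```python
import conifer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Train an ordinary scikit-learn BDT, e.g. on track-quality-like features.
X, y = make_classification(n_samples=5000, n_features=10, random_state=7)
clf = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Convert it to an FPGA implementation in which all trees, and all decisions within
# each tree, are evaluated in parallel for extreme low latency.
cfg = conifer.backends.xilinxhls.auto_config()  # backend/config key names may vary by version
cfg["OutputDir"] = "track_quality_bdt"
fpga_model = conifer.converters.convert_from_sklearn(clf, cfg)
fpga_model.compile()

# Bit-accurate emulation of the firmware decision function, compared with scikit-learn.
print(fpga_model.decision_function(X[:5]))
print(clf.decision_function(X[:5]))
```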

There are several ongoing projects at LHC which plan to deploy BDTs in the Level-1 trigger using Conifer. One example is a BDT designed to provide an estimate of the track quality, by learning to identify tracks that are reconstructed in error, and do not originate from a real particle (Savard, 2020).

While the accuracy and resource usage are similar between a BDT and a DNN, the latency is significantly reduced for a BDT architecture. The algorithm is planned to be implemented in the CMS Experiment for the data-taking period beginning in 2022.

Rather than relying on open-source libraries such as hls4ml or Conifer, which are based on high-level synthesis tools from FPGA vendors, other approaches are being considered that are based directly on hardware description languages, such as VHDL (Nottbeck et al., 2019; Fritzsche, 2020). One example is the application of ML for the real-time signal processing of the ATLAS Liquid Argon calorimeter (ATL, 1996). It has been shown that, with the upgraded capabilities for the HL-LHC collision environment, the conventional signal processing, which applies an optimal filtering algorithm (Cleland and Stern, 1994), will lose performance due to the increase in overlapping signals. More sophisticated DL methods have been found to be better suited to cope with these challenges, as they are able to maintain high signal detection efficiency and energy reconstruction performance. More specifically, simulation-based studies (Madysa, 2019) of dilated convolutional neural networks showed promising results. An implementation of this architecture for FPGAs has been designed using VHDL (Fritzsche, 2020) to meet the strict latency and resource requirements of the L1 trigger system. The firmware runs at a multiple of the bunch crossing frequency so that hardware resources can be reused through time-division multiplexing, and the maximum frequency is increased by using pipeline stages. Furthermore, DSPs are chained to perform the multiply-accumulate (MAC) operations between two layers efficiently. In this way, a core frequency of more than 480 MHz was reached, corresponding to 12 times the bunch crossing frequency.
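
For illustration, a dilated one-dimensional CNN of the kind studied for calorimeter signal processing can be expressed compactly in Keras, as sketched below; the window length, channel counts, and dilation pattern are assumptions, and the actual ATLAS work targets a VHDL firmware implementation rather than a Python model.

```python
import tensorflow as tf

# Input: a sliding window of digitized calorimeter samples (one value per bunch crossing).
n_samples = 30  # assumed window length; the real firmware operates on a continuous stream

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_samples, 1)),
    # Dilated convolutions widen the receptive field without adding parameters,
    # which helps disentangle overlapping pulses from nearby bunch crossings.
    tf.keras.layers.Conv1D(8, 3, dilation_rate=1, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(8, 3, dilation_rate=2, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(8, 3, dilation_rate=4, padding="causal", activation="relu"),
    # One regressed value per bunch crossing: the deposited transverse energy.
    tf.keras.layers.Conv1D(1, 1, activation="relu"),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```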

2.1.5. Bringing ML to Detector Front-End

While LHC detectors grow in complexity to meet the challenging conditions of higher-luminosity environments, growing data rates prohibit transmission of full event images off-detector for analysis by conventional FPGA-based trigger systems. As a consequence, event data must be compressed on-detector in low-power, radiation-hard ASICs while sacrificing minimal physics information.

Traditionally this has been accomplished by simple algorithms, such as grouping nearby sensors together so that only these summed "super-cells" are transmitted, sacrificing the fine segmentation of the detector. Recently, an autoencoder-based approach has been proposed, relying instead on a set of machine-learned radiation patterns to more efficiently encode the complete calorimeter image via a CNN. Targeting the CMS high-granularity endcap calorimeter (HGCal) (CMS Collaboration, 2017) at the HL-LHC, the algorithm aims to enable higher-fidelity reconstruction of electromagnetic and hadronic showers, critical for accurate particle identification.

The on-detector environment (the ECON-T concentrator ASIC; CMS Collaboration, 2017) demands a highly-efficient CNN implementation; a compact design should be thoroughly optimized for limited-precision calculations via quantization-aware training tools (Coelho et al., 2021). Further, to automate the design, optimization, and validation of the complex NN circuit, HLS-based tool flows (Duarte et al., 2018) may be adapted to target the ASIC form factor. Finally, as the front-end ASIC cannot be completely reprogrammed in the manner of an FPGA, a mature NN design is required from the time of initial fabrication. However, adaptability to changing run conditions and experimental priorities over the lifetime of the experiment motivate the implementation of all NN weights as configurable registers accessible via the chip's slow-control interface.
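
A quantization-aware encoder of this kind can be prototyped with QKeras, as in the sketch below; the input geometry, latent size, and bit widths are placeholders and do not reflect the actual ECON-T design.

```python
from tensorflow import keras
from qkeras import QConv2D, QDense, QActivation, quantized_bits, quantized_relu

# Assumed input: an 8x8 group of trigger cells from one sensor module; the real HGCal
# geometry, latent size, and bit widths are set by the available output bandwidth.
encoder = keras.Sequential([
    keras.Input(shape=(8, 8, 1)),
    QConv2D(4, 3, strides=2, padding="same",
            kernel_quantizer=quantized_bits(6, 0, alpha=1),
            bias_quantizer=quantized_bits(6, 0, alpha=1)),
    QActivation(quantized_relu(6)),
    keras.layers.Flatten(),
    # The latent vector is what is transmitted off-detector each bunch crossing.
    QDense(16, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
])
# The decoder used in training stays off-detector; only the encoder is implemented in the
# ASIC, with its weights exposed as registers so they can be updated via slow control.
encoder.summary()
```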

2.2. High Intensity Accelerator Experiments

2.2.1. ML-Based Trigger System at the Belle II Experiment

Context: The Belle II experiment in Japan (Abe et al., 2010; Altmannshofer et al., 2019) is engaged in the search for physics phenomena that cannot be explained by the Standard Model. Electrons and positrons are accelerated at the SuperKEKB particle accelerator to collide at the interaction point located inside of the Belle II detector. The resulting decay products are continually measured by the detector's heterogeneous sensor composition. The resulting data is then stored offline for detailed analysis.

Challenges: Due to the increasing luminosity (target luminosity is 8 × 10³⁵ cm⁻² s⁻¹) most of the recorded data is from unwanted but unavoidable background reactions, rather than electron-positron annihilation at the interaction point. Not only is storing all the data inefficient due to the high background rates, but it is also not feasible to build an infrastructure that stores all the generated data. A multilevel trigger system is used as a solution to decide online which recorded events are to be stored.

Existing and Planned Work: The Neural Network z-Vertex Trigger (NNT) used at Belle II is a deadtime-free level 1 (L1) trigger that identifies particles by estimating their origin along the beampipe. For the whole L1 trigger process, from data readout to the decision, a real-time budget of 5 μs is given to avoid dead time (Lai et al., 2020b). Due to the time cost of data pre-processing and transmission, the NNT needs to provide a decision within 300 ns of processing time.

The task of the NNT is to estimate the origin of a particle track so that it can be decided whether it originates from the interaction point or not. For this purpose, a multilayer perceptron (MLP) implemented on a Xilinx Virtex 6 XC6VHX380T FPGA is used. The MLP consists of three layers with 27 input neurons, 81 hidden-layer neurons, and two output neurons. Data from Belle II's central drift chamber (CDC) is used for this task, since it is dedicated to the detection of particle tracks. Before being processed by the network, the raw detector data are first combined into a 2D track based on so-called track segments, which are groupings of adjacent active sense wires. The output of the NNT delivers the origin of the track in z, along the beampipe, as well as the polar angle θ. With the help of the z vertex, the downstream global decision logic (GDL) can decide whether a track comes from the interaction point or not. In addition, the particle momentum can be determined using the polar angle θ (Baehr et al., 2019).
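
The network geometry described above can be summarized in a few lines of Keras, as sketched below; the activation functions and training setup are assumptions, and the deployed NNT is a fixed-point FPGA implementation trained offline with iRPROP rather than a Keras model.

```python
import tensorflow as tf

# Geometry of the network described above: 27 track-segment inputs from the CDC,
# 81 hidden neurons, and 2 outputs (z vertex along the beampipe and polar angle theta).
nnt = tf.keras.Sequential([
    tf.keras.Input(shape=(27,)),
    tf.keras.layers.Dense(81, activation="tanh"),   # activation assumed for illustration
    tf.keras.layers.Dense(2, activation="tanh"),    # outputs scaled to the (z, theta) ranges
])
nnt.compile(optimizer="adam", loss="mse")  # the hardware network is trained offline with iRPROP
nnt.summary()
```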

The networks used in the NNT are trained offline. The first networks were trained with plain simulated data because no experimental data were available; for more recent networks, reconstructed tracks from experimental data are used. For the training, the iRPROP algorithm is used, which is an extension of the RPROP backpropagation algorithm. Current results show a good correlation between the NNT tracks and reconstructed tracks. Since the event rate and the background noise are currently still tolerable, the z-cut, i.e., the allowed estimated track origin for a track to be kept, is chosen at ±40 cm. With increasing luminosity and the associated increase in background, this z-cut can be tightened. Since the new Virtex UltraScale-based universal trigger board (UT4) becomes available for the NNT this year, an extension of the data preprocessing is planned. This will be done with a 3D Hough transformation for further efficiency increases. It has already been shown in simulation that a more accurate resolution and larger solid-angle coverage can be achieved (Skambraks et al., 2020).

2.2.2. Mu2e

Context: The Mu2e experiment at Fermilab (Bartoszek et al., 2014) will search for the charged lepton flavor violating process of neutrino-less μ → e coherent conversion in the field of an aluminum nucleus. About 7·10¹⁷ muons, provided by a dedicated muon beamline under construction at Fermilab, will be stopped in 3 years in the aluminum target. The corresponding single-event sensitivity will be 2.5·10⁻¹⁷. To detect the signal electron (p = 105 MeV), Mu2e uses a detector system made of a straw-tube tracker and a crystal electromagnetic calorimeter (Pezzullo, 2017).

Challenges: The trigger system is based on detector Read Out Controllers (ROCs), which continuously stream out the zero-suppressed data to the Data Transfer Controller units (DTCs). The proton pulses are delivered at a rate of about 600 kHz and a duty cycle of about 30% (0.4 s out of the 1.4 s booster-ring delivery period). Each proton pulse is considered a single event, with the data from each event then grouped at a single server using a 10 Gbps Ethernet switch. Then, the online reconstruction of the events starts and makes a trigger decision. The trigger system needs to satisfy the following requirements: (1) provide efficiency better than 90% for the signals; (2) keep the trigger rate below a few kHz, equivalent to 7 Pb/year; (3) achieve a processing time < 5 ms/event. Our main physics triggers use the information of the reconstructed tracks to make the final decision.

Existing and Planned Work: The current strategy is to perform the helix pattern recognition and the track reconstruction with the CPUs of the DAQ servers, but so far this design has shown limitations in meeting the required timing performance (Pezzullo, 2020). Another idea that the collaboration has started exploring is to perform the early stages of the track reconstruction on the ROC and DTC FPGAs using the High-Level Synthesis (HLS) tool and the hls4ml package (Pierini et al., 2020). The Mu2e helix pattern-recognition algorithms (Pezzullo, 2020) are a natural fit for these tools for several reasons: they use neural networks to clean up the recorded straw hits from hits by low-momentum electrons (p < 10 MeV), and they perform large combinatorial calculations when reconstructing the helicoidal electron trajectory. This R&D is particularly important for the design of the trigger system of the planned upgrade of Mu2e (Abusalma et al., 2018), where we expect to: (i) increase the beam intensity by at least a factor of 10, (ii) increase the duty cycle to at least 90%, and (iii) increase the number of detector channels to cope with the increased occupancy.

2.3. Materials Discovery

2.3.1. Materials Synthesis

Context: Advances in electronics, transportation, healthcare, and buildings require the synthesis of materials with controlled synthesis-structure-property relationships. To achieve application-specific performance metrics, it is common to design and engineer materials with highly ordered structures. This directive has led to a boom in non-equilibrium materials synthesis techniques. Most exciting are additive synthesis and manufacturing techniques, for example, 3D printing (Visser et al., 2015; Parekh et al., 2016; Zarek et al., 2016; Ligon et al., 2017; Wang et al., 2020c) and thin-film deposition (Richter, 1990; Chrisey and Hubler, 1994; Kelly and Arnell, 2000; Yoshino et al., 2000; Park and Sudarshan, 2001; George, 2010; Marvel et al., 2013), where complex nanoscale architectures of materials can be fabricated. To glean insight into synthesis dynamics, there has been a trend to include in situ diagnostics (Egelhoff and Jacob, 1989; Thomas, 1999; Langereis et al., 2007; Ojeda-G-P et al., 2017). There is less emphasis on automating the downstream analysis to turn data into actionable information that can detect anomalies in synthesis, guide experimentation, or enable closed-loop control. Part of the challenge with automating analysis pipelines for in situ diagnostics is the highly variable nature and multimodality of the measurements and the sensors. A system might measure many time-resolved state variables (time series) at various locations (e.g., temperature, pressure, energy, flow rate, etc.) (Hansen et al., 1999). Additionally, it is common to measure time-resolved spectroscopic signals (spectrograms) that provide, for instance, information about the dynamics of the chemistry and energetic distributions of the materials being synthesized (Dauchot et al., 1995; Aubriet et al., 2002; Cooks and Yan, 2018; Termopoli et al., 2019). Furthermore, a growing number of techniques leverage high-speed temporally resolved imaging to observe synthesis dynamics (Trigub et al., 2017; Ojeda-G-P et al., 2018).

Challenges: Experimental synthesis tools and in situ diagnostic instrumentation are generally semi-custom instruments provided by commercial vendors. Many of these vendors rely on proprietary software to differentiate their products from the competition. In turn, the closed nature of these tools, and even of their data schemas, makes it hard to utilize them fully. The varied nature of the sensors and their suppliers compounds this challenge. Integration and synchronization of multiple sensing modalities require a custom software solution. However, there is a catch-22 because the software does not yet exist: researchers cannot be assured that the development of analysis pipelines will contribute to their ultimate goal of discovering new materials or synthesizing materials with increased fecundity. Furthermore, there are significant workforce challenges, as most curricula emphasize Edisonian rather than computational methods in the design of synthesis. There is an urgent need for multilingual trainees fluent in typically disparate fields.

Existing and Planned Work: Recently, the materials science community has started to embrace machine learning to accelerate scientific discovery (Ramprasad et al., 2017; Butler et al., 2018; Schmidt et al., 2019). However, there have been growing pains. The ability to create highly overparameterized models to solve problems with limited data provides a false sense of efficacy without the generalization required for science. Machine learning model architectures designed for natural time series and images are ill-posed for physical processes governed by equations. In this regard, there is a growing body of work to embed physics in machine learning models, which serves as the ultimate regularizer. For instance, rotational (Kalinin et al., 2020; Oxley et al., 2020) and Euclidean (Smidt, 2020; Smidt et al., 2021) equivariance have been built into model architectures, and methods to learn sparse representations of underlying governing equations have been developed (Champion et al., 2019; de Silva et al., 2020; Kaheman et al., 2020).
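
As an illustration of the latter, the PySINDy package (de Silva et al., 2020) recovers sparse governing equations from time-series data; the minimal sketch below identifies the dynamics of a simulated damped oscillator standing in for a measured process variable, with the optimizer threshold and candidate term library chosen purely for illustration.

```python
import numpy as np
import pysindy as ps

# Simulate a damped harmonic oscillator, standing in for a measured process variable.
dt = 0.01
t = np.arange(0, 10, dt)
x = np.exp(-0.1 * t) * np.cos(2 * np.pi * t)
v = np.gradient(x, dt)
data = np.column_stack([x, v])

# Fit a sparse model dX/dt = Theta(X) * Xi: most candidate terms receive zero
# coefficients, leaving a compact, interpretable governing equation.
model = ps.SINDy(optimizer=ps.STLSQ(threshold=0.05),
                 feature_library=ps.PolynomialLibrary(degree=2))
model.fit(data, t=dt)
model.print()
```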

Another challenge is that real systems have system-specific discrepancies that need to be compensated (Kaheman et al., 2019). For example, a precursor from a different batch might have a slightly different viscosity that needs to be considered. There is an urgent need to develop these foundational methods for materials synthesis. Complementing these foundational studies, there has been a growing body of literature emphasizing post-mortem machine-learning-based analysis of in situ spectroscopies (Trejo et al., 2019; Provence et al., 2020). As these concepts become more mature, there will be an increasing emphasis on codesign of synthesis systems, machine learning methods, and hardware for on-the-fly analysis and control. This effort toward self-driving laboratories is already underway in wet-chemical synthesis where there are minimal dynamics, and thus, latencies are not a factor (Langner et al., 2020; MacLeod et al., 2020). Future efforts will undoubtedly focus on controlling dynamic synthesis processes where millisecond-to-nanosecond latencies are required.

2.3.2. Scanning Probe Microscopy

Context: Touch is the first sense humans develop. Since the invention of the atomic force microscope (AFM) in 1985 (Binnig et al., 1986), humans have been able to "feel" surfaces with atomic-level resolution and pN sensitivity. AFMs rely on bringing an atomically sharp tip mounted on a cantilever into contact with a surface. By scanning this tip, nanometer-to-atomically resolved images can be constructed by measuring the angular deflection of a laser bounced off the cantilever. This detection mechanism provides high-precision, sub-angstrom measures of displacement.

By adding functionality to the probe (e.g., electrical conductivity (Benstetter et al., 2009), resistive heaters (King, 2005), single-molecule probes (Oberhauser et al., 2002), and N-V centers (Ariyaratne et al., 2018)), scanning probe microscopy (SPM) can measure nanoscale functional properties, including electrical conductivity (Gómez-Navarro et al., 2005; Seidel et al., 2010), piezoresponse (Jesse and Kalinin, 2011), electrochemical response (Jesse et al., 2012), magnetic force (Kazakova et al., 2019), magnetometry (Casola et al., 2018), and much more. These techniques have been expanded to include dynamic measurements during a tip-induced perturbation that drives a structural transformation. These methods have led to a boom in new AFM techniques, including fast-force microscopy (Benaglia et al., 2018), current-voltage spectroscopies (Holstad et al., 2020), band-excitation-based spectroscopies (Jesse et al., 2018), and full-acquisition-mode spectroscopies (Somnath et al., 2015). What has emerged is a data deluge in which these techniques are either underutilized or under-analyzed.

Challenges: The key practical challenge is that it takes days to weeks to properly analyze data from a single measurement. As a result, experimentalists have little information on how to design their experiments. There is even minimal feedback on whether the experiments have artifacts (e.g., tip damage) that would render the results unusable. The number of costly failed experiments is a strong deterrent to conducting advanced scanning probe spectroscopies and to developing even more sophisticated imaging techniques. There is a significant challenge in both the acceleration and the automation of analysis pipelines.

Existing and Planned Work: In materials science, scanning probe microscopy has quickly adopted machine learning. Techniques for linear and nonlinear spectral unmixing provide rapid visualization and extraction of information from these datasets to discover and unravel physical mechanisms (Collins et al., 2020a,b; Ziatdinov et al., 2020; Kalinin et al., 2021). The ease of applying these techniques has led to justified concerns about the overinterpretation of results and the overextension of linear models (Griffin et al., 2020) to highly nonlinear systems. More recently, long short-term memory autoencoders have been constrained to have non-negative and sparse latent spaces for spectral unmixing. By traversing the learned latent space, it has been possible to draw complex structure-property relationships (Agar et al., 2019; Holstad et al., 2020). There are significant opportunities to accelerate the computational pipeline such that information can be extracted on practically relevant time scales by the experimentalist on the microscope.

Due to the high velocity of data, up to GB/s, with sampling rates of up to 100,000 spectra per second, extracting even cursory information will require the confluence of data-driven models, physics-informed machine learning, and AI hardware. As a tangible example, in band-excitation piezoresponse force microscopy, the frequency-dependent cantilever response is measured at rates of up to 2,000 spectra per second. Extracting the parameters from these measurements requires fitting the response to an empirical model. Using least-squares fitting, throughput is limited to ~50 fits per core-minute, but neural networks provide an opportunity to accelerate analysis and better handle noisy data (Borodinov et al., 2019). There is an opportunity to deploy neural networks on GPU or FPGA hardware accelerators to approximate and accelerate this pipeline by orders of magnitude.
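As a sketch of how such a surrogate could look, the hypothetical example below trains a small multilayer perceptron on simulated simple-harmonic-oscillator (SHO) amplitude spectra to regress the resonance parameters directly, so that one batched forward pass replaces many iterative least-squares fits. The frequency grid, parameter ranges, and network size are illustrative assumptions, not the pipeline of the cited works.

```python
# A hedged sketch of a neural-network surrogate for per-spectrum SHO fitting.
import torch
import torch.nn as nn

freqs = torch.linspace(0.9, 1.1, 128)        # normalized drive frequencies

def sho_amplitude(a, f0, q):
    """Amplitude response of a driven SHO sampled at the probe frequencies."""
    return a * f0**2 / torch.sqrt((freqs**2 - f0**2)**2 + (freqs * f0 / q)**2)

# Simulated training set: noisy spectra with known parameters.
n = 5000
params = torch.stack([torch.rand(n) * 0.9 + 0.1,         # amplitude a
                      torch.rand(n) * 0.10 + 0.95,       # resonance f0
                      torch.rand(n) * 80 + 20], dim=1)   # quality factor q
spectra = torch.stack([sho_amplitude(*p) for p in params])
spectra = spectra + 0.01 * torch.randn_like(spectra)

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 3))                  # -> (a, f0, q)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for i in range(0, n, 256):
        loss = nn.functional.mse_loss(model(spectra[i:i + 256]),
                                      params[i:i + 256])
        opt.zero_grad()
        loss.backward()
        opt.step()

# After training, a single batched forward pass replaces thousands of
# iterative fits; the same graph can be ported to GPU or FPGA accelerators.
```

The design choice here is to pay the fitting cost once, at training time on simulated spectra, so that inference on the instrument reduces to a fixed, parallelizable matrix pipeline.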

2.4. Fermilab Accelerator Controls

Context: The Fermi National Accelerator Laboratory (Fermilab) is dedicated to investigating matter, energy, space, and time (Fermilab, 2021). For over 50 years, Fermilab's primary tool for probing the most elementary nature of matter has been its vast accelerator complex. Spanning several miles of tunnels, the accelerator complex is in fact multiple accelerators and beam transport lines, each representing different accelerator techniques and eras of accelerator technology. In its long history, Fermilab's accelerator complex has had to adapt to the mission, asking more of the accelerators than they were designed for and often using them for purposes for which they were never intended. This has often resulted in layering new controls on top of existing antiquated hardware. Until recently, accelerator controls focused mainly on providing tools and data to the machine operators and experts for tuning and optimization. Having recognized the future inadequacies of the current control system and the promise of new technologies such as ML, the Fermilab accelerator control system will be largely overhauled in the coming years as part of the Accelerator Controls Operations Research Network (ACORN) project (Fermilab, 2021).

Challenges: The accelerator complex brings unique challenges for machine learning. Particle accelerators are immensely complicated machines, each consisting of many thousands of variable components and an even larger number of data sources. Their large size and the differing types, resolutions, and frequencies of data mean that collecting and synchronizing data is difficult. Also, as one might imagine, control and regulation of beams that travel at near light speed is always a challenge. Maintaining and upgrading the accelerator complex controls is costly. For this reason, much of the accelerator complex is a mixture of obsolete, new, and cutting-edge hardware.

Existing and Planned Work: Traditional accelerator controls have focused on grouping like elements so that particular aspects of the beam can be tuned independently. However, many elements are not always completely separable. Magnets, for example, often have higher-order fields that affect the beam in ways other than the primary intent. Machine learning has finally made it possible to combine readings and beam-control elements previously believed to be unrelated into novel control and regulation schemes.

One such novel regulation project is underway for the Booster Gradient Magnet Power Supply (GMPS). GMPS controls the primary trajectory of the beam in the Booster (OPE, 2021). The project hopes to increase the regulation precision of GMPS ten-fold. When complete, GMPS would be the first FPGA online ML-model-based regulation system in the Fermilab accelerator complex (John et al., 2021). The promise of ML for accelerator controls is so apparent to the Department of Energy that a call for accelerator controls using ML was made to the national labs (DOE, 2020). Two proposals submitted by Fermilab and approved by the DOE make up the Real-time Edge AI for Distributed Systems (READS) project. The first READS project will create a complementary ML regulation system for slow extraction from the Delivery Ring to the future Mu2e experiment (Bartoszek et al., 2015), using ML to help regulate the slow spill. The second READS project will tackle a long-standing problem of de-blending beam losses in the Main Injector (MI) enclosure, which houses two accelerators, the MI and the Recycler; during normal operation, high-intensity beams exist in both machines, and a real-time online model will be developed to de-blend the losses coming from the two accelerators. Both READS projects will make use of FPGA online ML models for inference and will collect data at low latencies from distributed systems around the accelerator complex (Seiya et al., 2021).

2.5. Neutrino and Direct Dark Matter Experiments

2.5.1. Accelerator Neutrino Experiments

Context: Accelerator neutrino experiments detect neutrinos with energies ranging from a few tens of MeV up to about 20 GeV. The detectors can be anywhere from tens of meters away from the neutrino production source to as far away as 1,500 km. For experiments with longer baselines, it is common to have both a near detector (~1 km baseline) and a more distant far detector (baseline of hundreds of km). Accelerator neutrino experiments focused on long-baseline oscillations use highly pure muon neutrino beams, produced by pion decays in flight. By using a system of magnetic horns it is possible to produce either a neutrino or an antineutrino beam. This ability is particularly useful for CP-violation measurements. Other experiments use pions decaying at rest, which produce both muon and electron flavors.

The primary research goal of many accelerator neutrino experiments is to perform neutrino oscillation measurements: the process by which neutrinos created in one flavor state are observed interacting as different flavor states after traveling a given distance. Often this takes the form of measuring electron neutrino appearance and muon neutrino disappearance. The rate of oscillation is energy-dependent, and so highly accurate energy estimation is essential. Another key research goal for accelerator neutrinos is to measure neutrino cross-sections, which, in addition to accurate energy estimation, requires the identification of the particles produced by the neutrino interaction.

Challenges: Accelerator neutrino experiments employ a variety of detector technologies. These range from scintillator detectors such as NOvA (Ayres et al., 2007) (liquid), MINOS (Ambats et al., 1998) (solid), and MINERvA (MIN, 2006) (solid), to water Cherenkov detectors such as T2K (Abe et al., 2011), and finally liquid argon time projection chambers such as MicroBooNE (Fleming, 2012), ICARUS (Amerio et al., 2004), and DUNE (Abi et al., 2020a). Pion decay-at-rest experiments (COHERENT Akimov et al., 2015, JSNS2 Ajimura et al., 2017) use yet different technologies (liquid and solid scintillators, as well as solid-state detectors). The individual challenges and solutions are unique to each experiment, though common themes do emerge.

Neutrino interactions are fairly uncommon due to their low cross-section. Some experiments can see as few as one neutrino interaction per day. This, combined with many detectors being close to the surface, means that analyses have to be highly efficient whilst achieving excellent background rejection. This is true both in online data taking and offline data analysis.

As experiments typically have very good temporal and/or spatial resolution, it is often fairly trivial to isolate entire neutrino interactions. This means that it is then possible to use image recognition tools such as CNNs to perform classification tasks. As a result, many experiments initially utilized variants of GoogLeNet, though many are now transitioning to GNNs and to networks better suited to sparse images.

Existing and Planned Work: As discussed in Section 2.5.2, DUNE will use machine learning in its triggering framework to handle its immense data rates and to identify candidate interactions, for both traditional neutrino oscillation measurements and for candidate solar and supernova events. Accelerator neutrino experiments have successfully implemented machine learning techniques for a number of years, the first such example being in 2017 (Adamson et al., 2017), where the network increased the effective exposure of the analysis by 30%. Networks aimed at performing event classification are common across many experiments, with DUNE having recently published a network capable of exceeding its design sensitivity on simulated data and which includes outputs that count the numbers of final state particles from the interaction (Abi et al., 2020a).

Experiments are becoming increasingly cognizant of the dangers of networks learning features of the training data beyond what is intended. For this reason, it is essential to carefully construct training datasets such that this risk is reduced. However, it is not possible to correct for or quantify bias which is not yet known; therefore, the MINERvA experiment has explored the use of a domain adversarial neural network (Perdue et al., 2018) to reduce unknown biases arising from differences between simulated and real data. The network features a gradient reversal layer in the domain network (trained on data), thus discouraging the classification network (trained on simulation) from learning from any features that behave differently between the two domains. A more thorough exploration of machine learning applied to accelerator neutrino experiments can be found in Psihas et al. (2020).
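The gradient-reversal mechanism itself is compact. The hypothetical sketch below shows the basic pattern (an identity forward pass whose gradient is negated) rather than the MINERvA network; all layer sizes and class counts are illustrative assumptions.

```python
# A hedged sketch of the gradient-reversal idea behind domain adversarial
# training. Toy sizes only; not the network of Perdue et al. (2018).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign going backward."""
    @staticmethod
    def forward(ctx, x, weight):
        ctx.weight = weight
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.weight * grad_output, None

features   = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared extractor
classifier = nn.Linear(32, 5)   # event classes, trained on simulation
domain_net = nn.Linear(32, 2)   # simulation-vs-data discriminator

def losses(x_sim, y_sim, x_all, d_all):
    class_loss = nn.functional.cross_entropy(classifier(features(x_sim)), y_sim)
    # The reversed gradient pushes the shared features to *confuse* the domain
    # discriminator, suppressing simulation-specific artifacts.
    reversed_feats = GradReverse.apply(features(x_all), 1.0)
    domain_loss = nn.functional.cross_entropy(domain_net(reversed_feats), d_all)
    return class_loss + domain_loss

x_sim, y_sim = torch.randn(64, 64), torch.randint(0, 5, (64,))
x_all, d_all = torch.randn(128, 64), torch.randint(0, 2, (128,))
losses(x_sim, y_sim, x_all, d_all).backward()  # extractor sees reversed gradient
```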

2.5.2. Neutrino Astrophysics

Context: Neutrino astrophysics spans a wide range of energies, with neutrinos emitted from both steady-state and transient sources with energies from less than an MeV to the EeV scale. Observations of astrophysical neutrinos are valuable both for the understanding of neutrino sources and for probing fundamental physics. Neutrino detectors designed for observing these tend to be of huge scale (kilotons to megatons). Existing detectors involve a diverse range of materials and technologies for particle detection; they include Cherenkov radiation detectors in water and ice, liquid scintillator detectors, and liquid argon time projection chambers.

Astrophysical neutrinos are one kind of messenger contributing to the thriving field of multimessenger astronomy, in which signals from neutrinos, charged particles, gravitational waves, and photons spanning the electromagnetic spectrum are observed in coincidence. This field has had some recent spectacular successes (Abbott et al., 2017a; Aartsen et al., 2018; Graham et al., 2020). For multimessenger transient astronomy, time is of the essence for sharing data and locating sources. Directional information from the neutrinos is critically valuable, to allow prompt location of the source by other messengers.

Potentially interesting transient astrophysical sources include sources of ultra-high-energy neutrinos, as well as nearby stellar core collapses. Neutrinos in the multi-GeV and higher range are emitted from distant cosmic sources, including kilonovae and blazars, and cubic-km-scale water- and ice-based Cherenkov detectors such as IceCube at the South Pole can produce fast alerts from single neutrino observations.

Core-collapse supernovae are another promising use case for fast machine learning. These are copious sources of neutrinos with energies of a few tens of MeV, which are emitted in a burst lasting a few tens of seconds (Scholberg, 2012; Mirizzi et al., 2016). The neutrinos are prompt after core collapse (as are gravitational waves), but observable electromagnetic radiation will not emerge for anywhere from tens of seconds to 10⁶ s, depending on the nature of the progenitor and its envelope (Kistler et al., 2013). Low-latency information is therefore immensely valuable. Core-collapse supernovae are rare events within the distance range observable by current and near-future neutrino detectors. They occur only every several decades, which makes prompt and robust detection especially important. The SuperNova Early Warning System (Antonioli et al., 2004; Al Kharusi et al., 2020) aims to provide a prompt alert from a coincidence of burst detections. However, pointing information from neutrinos is relatively difficult to extract promptly. Detectors with the capability for prompt pointing, thanks to the anisotropy of neutrino interactions (i.e., interaction products that remember where the neutrino came from), offer the best prospects, but these need to be able to select neutrino events from background and reconstruct their directions with very low latency.

Presupernova neutrinos are another interesting possibility. In the final stages of stellar burning, one expects a characteristic uptick in neutrino luminosity and average energy, producing observable events in detectors for nearby progenitors. This could give a warning of hours or perhaps days before core collapse for the nearest progenitors. For this case, fast selection of neutrino-like events and reconstruction of their directional information for background reduction is needed.

Challenges: The challenges, in general, are fast selection and reconstruction of neutrino event (interaction) information. The specifics of the problem depend on the particular detector technology, but in general, the charged particle products of a neutrino interaction will have a distinctive topology or other signature and must be selected from a background of cosmic rays, radiologicals, or detector noise. Taking as an example a liquid argon time projection chamber like the Deep Underground Neutrino Experiment (DUNE), neutrino-induced charged particles produce charge and light signals in liquid argon. Supernova neutrino interactions appear as small (tens of cm spatial scale) stubs and blips (Abi, 2020; Abi et al., 2020b). The recorded neutrino event information from the burst can be used to reconstruct the supernova direction to ~5–10° for a core collapse at 10 kpc distance (Abi, 2020; Roeth, A. J., 2020). The neutrino events need to be selected from a background of radioactivity and cosmogenics, as well as detector noise, requiring background reduction of many orders of magnitude. The total data rate amounts to ~40 Tb/s. The detector must take data for a decade or more at this rate, with near-continuous uptime.

For steady-state signals such as solar neutrinos, triggering on individual events in the presence of large backgrounds is a challenge that can be addressed with machine learning. For burst signals, the triggering is a different problem: the general strategy is to read out all information on every channel within a tens-of-seconds time window, for the case of a triggered burst. This leads to the subsequent problem of sifting the signal events and reconstructing sufficient information on a very short timescale to point back to the supernova. The required timescale is minutes, or preferably seconds. Both the event-by-event triggering and fast directional reconstruction can be addressed with fast machine learning.

Existing and Planned Work: There are a number of existing efforts toward the use of machine learning for particle reconstruction in neutrino detectors including water Cherenkov, scintillator, and liquid argon detectors. These overlap to some extent with the efforts described in Section 2.5.1. Efforts directed specifically toward real-time event selection and reconstruction are ramping up. Some examples of ongoing efforts can be found in Abi et al. (2020a), Acciarri et al. (2020), Psihas et al. (2020), Abratenko et al. (2020), Wang et al. (2020a), Drielsma et al. (2021), and Qian et al. (2021).

2.5.3. Direct Detection Dark Matter Experiments

Context: Direct dark matter (DM) search experiments take advantage of the vastly abundant DM in the universe and are searching for direct interactions of DM particles with the detector target material. The various target materials can be separated into two main categories, crystals and liquid noble gases, though other material types are subject to ongoing detector R&D efforts (Alexander et al., 2016; Schumann, 2019).

One of the most prominent particle DM candidates is the WIMP (weakly interacting massive particle), a thermal, cold DM candidate with an expected mass and coupling to Standard Model particles at the weak scale (Jungman et al., 1996). However, decades of intensive searches at both direct DM and collider experiments have not yet been able to discover the vanilla WIMP, while excluding most of the parameter space of the simplest WIMP hypothesis (Schumann, 2019). This has led to a paradigm shift for thermal DM toward increasingly lower masses, well below 1 GeV (and thus the weak scale) (Boehm and Fayet, 2004) and as low as a few keV, i.e., the warm DM limit (Weinberg et al., 2015). Thermal sub-GeV DM is also referred to as light dark matter (LDM). Other DM candidates that are being considered include non-thermal, bosonic candidates like dark photons, axions, and axion-like particles (ALPs) (Holdom, 1986; Svrcek and Witten, 2006; Peccei, 2008).

The most common interactions that direct DM experiments are trying to observe are thermal DM scattering off either a nucleus or an electron, and the absorption of dark bosons under the emission of an electron. The corresponding signatures are either nuclear recoils or electron recoils.

Challenges: In all the interactions mentioned, and independent of the target material, a lower DM mass means a smaller energy deposition in the detector and thus a signal amplitude closer to the baseline noise. Typically, the baseline noise has non-Gaussian contributions that can fire a simple amplitude-over-threshold trigger even if the duration of the amplitude above threshold is taken into account. The closer the trigger threshold is to the baseline, the higher the rate of these spurious events. In experiments which cannot read out raw data continuously and which have constraints on the data throughput, the hardware-level trigger threshold thus has to be high enough to significantly suppress accidental noise triggers.

In the hunt for increasingly lower DM masses, however, an as-low-as-possible trigger threshold is highly desirable, calling for a more sophisticated and extremely efficient event classification at the hardware trigger level. Particle-induced events have a known, and generally constant, pulse shape, while non-physical noise "events" (e.g., induced by the electronics) generally have a varying pulse shape which is not necessarily predictable. A promising approach in such a scenario is the use of machine learning techniques for highly efficient noise-event rejection in real time, allowing the hardware-level trigger threshold to be lowered, and thus extending the low-mass reach of most if not all direct DM searches, while remaining within the raw-data read-out limitations imposed by the experimental set-up.
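As an illustration of the kind of lightweight classifier that could fit within an FPGA-level trigger, the sketch below defines a tiny one-dimensional CNN that maps a digitized trace to a particle-like vs. noise-like score. The trace length, architecture, and labels are illustrative assumptions, not the design of any particular experiment.

```python
# A hedged sketch of a compact pulse-shape classifier for a trigger path.
import torch
import torch.nn as nn

class PulseClassifier(nn.Module):
    """Tiny 1D CNN: digitized trace -> (particle-like, noise-like) logits."""
    def __init__(self, trace_len=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(4, 8, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * (trace_len // 4), 2),
        )

    def forward(self, traces):            # traces: (batch, trace_len)
        return self.net(traces.unsqueeze(1))

model = PulseClassifier()
fake_traces = torch.randn(32, 256)             # stand-in for digitized waveforms
decision = model(fake_traces).argmax(dim=1)    # 0 = particle-like, 1 = noise
# Quantization and pruning (Section 4) would shrink such a model to the
# resource budget of a trigger FPGA.
```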

Existing and Planned Work: Machine learning is already applied by various direct DM search experiments (Simola et al., 2019; Khosa et al., 2020; Szydagis et al., 2021), especially in the context of offline data analyses. However, it is not yet used to its full potential within the direct DM search community. Activities in this regard are still ramping up, but with increasing interest, effort, and commitment. Typical offline applications to date are the reconstruction of the energy or position of an event and the classification of events (e.g., signal against noise, or single-scattering against multiple-scattering). In parallel, R&D has started on real-time event classification within the FPGA-level trigger architecture of the SuperCDMS experiment (Agnese et al., 2017), with the long-term goal of lowering the trigger threshold notably closer to the baseline noise without triggering on spurious events. While these efforts are being conducted within the context of SuperCDMS, the goal is a modular trigger solution that can be more easily adapted to other experiments.

2.6. Electron-Ion Collider

Context: The Electron-Ion Collider (EIC) will support the exploration of nuclear physics over a wide range of center-of-mass energies and ion species, using highly-polarized electrons to probe highly-polarized light ions and unpolarized heavy ions. The frontier accelerator facility will be designed and constructed in the U.S. over the next 10 years. The requirements of the EIC are detailed in a white paper (Accardi et al., 2016), the 2015 Nuclear Physics Long Range Plan (Aprahamian et al., 2015), and an assessment of the science by the National Academies of Science (National Academies of Sciences Engineering and Medicine, 2018). The EIC's high luminosity and highly polarized beams will push the frontiers of particle accelerator science and technology and will enable us to embark on a precision study of the nucleon and the nucleus at the scale of sea quarks and gluons, over all of the kinematic range that is relevant as described in the EIC Yellow Report (Abdul Khalek et al., 2021).

Challenges: While event reconstruction at the EIC is likely easier than the same task at the present LHC or RHIC hadron machines, and much easier than for the High-Luminosity LHC, which will start operating 2 years earlier than the EIC, possible contributions from machine backgrounds pose a challenge. The expected gain in CPU performance over the next 10 years, as well as possible improvements in reconstruction software from the use of AI and ML techniques, give a considerable margin to cope with the higher event complexity that may arise from higher background rates. Software design and development will constitute an important ingredient for the future success of the experimental program at the EIC. Moreover, the cost of IT-related components, from software development to storage systems and distributed complex e-infrastructures, can rise considerably if proper understanding and planning are not incorporated from the beginning into the design of the EIC. This planning must include AI and ML techniques, in particular for the compute-detector integration at the EIC, and training in these techniques.

Existing and Planned Work: Accessing the EIC physics of interest requires an unprecedented integration of the interaction region (IR) and detector designs. The triggerless DAQ scheme that is foreseen for the EIC will extend the highly integrated IR-detector designs to analysis. Seamless data processing from DAQ to analysis at the EIC would make it possible to streamline workflows, e.g., in a combined software effort for the DAQ, online, and offline analysis, as well as to utilize emerging software technologies, in particular fast ML algorithms, at all levels of data processing. This will provide an opportunity to further optimize the physics reach of the EIC. The status and prospects for “AI for Nuclear Physics” have been discussed in a workshop in 2020 (Bedaque et al., 2021). Topics related to fast ML are intelligent decisions about data storage and (near) real-time analysis. Intelligent decisions about data storage are required to ensure the relevant physics is captured. Fast ML algorithms can improve the data taken through data compactification, sophisticated triggers, and fast online analysis. At the EIC, this could include automated alignment and calibration of the detectors as well as automated data-quality monitoring. A (near) real-time analysis and feedback enables quick diagnostics and optimization of experimental setups as well as significantly faster access to physics results.

2.7. Gravitational Waves

Context: As predicted by Einstein in 1916, gravitational waves are fluctuations in the gravitational field which, within the theory of general relativity, manifest as a change in the spacetime metric. These ripples in the fabric of spacetime travel at the speed of light and are generated by changes in the mass quadrupole moment, as, for example, in the case of two merging black holes (Abbott et al., 2016b). To detect gravitational waves, the LIGO/Virgo/KAGRA collaborations employ a network of kilometer-scale laser interferometers (Harry and LIGO Scientific Collaboration, 2010; Aso et al., 2013; Acernese et al., 2014; Affeldt et al., 2014). An interferometer consists of two perpendicular arms; as a gravitational wave passes through the instrument, it stretches one arm while compressing the other in an alternating pattern dictated by the gravitational wave itself. This length difference is then measured from the laser interference pattern.

Gravitational waves provide a unique way to study fundamental physics, including testing the theory of general relativity in the strong-field regime, the speed of propagation and polarization of gravitational waves, the state of matter at nuclear densities, the formation of black holes, effects of quantum gravity, and more. They have also opened up a completely new window for observing the Universe, complementary to the one enabled by electromagnetic and neutrino astronomy. This includes studying the populations, formation, and evolution of compact objects such as binary black holes and neutron stars, establishing the origin of gamma-ray bursts (GRBs), measuring the expansion of the Universe independently of electromagnetic observations, and more (Abbott et al., 2017b).

Challenges: In the next observing run in 2022, LIGO, Virgo, and KAGRA will detect an increasing number of gravitational-wave candidates. This poses a computational challenge to the current detection framework, which relies on matched-filtering techniques that match parameterized waveforms (templates) from simulations against the gravitational-wave time-series data (Sathyaprakash and Dhurandhar, 1991; Vaseghi, 2001; Abbott et al., 2016b). Matched filtering scales poorly as the low-frequency sensitivity of the instruments improves and the search parameter space of the gravitational wave expands to cover spin effects and low-mass compact objects. To estimate the physical properties of the gravitational wave, stochastic Bayesian posterior samplers, such as Markov-chain Monte Carlo and Nested Sampling, have been used to date. Such analysis approaches can take hours to days to complete (Abbott et al., 2016a). The latency introduced by the current search and parameter-estimation pipeline is non-negligible and can hinder electromagnetic follow-up of time-sensitive sources like binary neutron stars, supernovae, and other, yet unknown, systems.
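For readers unfamiliar with the technique, the sketch below shows the core matched-filtering operation on synthetic data: correlating a template against a noisy time series and scanning the resulting signal-to-noise ratio (SNR) for peaks. Real searches additionally whiten by the measured noise power spectral density and scan large template banks; all numbers below are illustrative assumptions.

```python
# A hedged, minimal sketch of matched filtering on synthetic data.
import numpy as np

fs = 4096                                   # sample rate [Hz]
t = np.arange(0, 4, 1 / fs)
template = np.sin(2 * np.pi * (50 * t + 40 * t**2)) * np.exp(-(t - 2)**2)

rng = np.random.default_rng(0)
strain = rng.normal(size=t.size)            # white-noise stand-in for data
shift = fs                                  # inject the template 1 s late
strain[shift:] += 0.3 * template[:-shift]   # weak hidden signal

# Frequency-domain cross-correlation of data with the template (circular,
# for brevity); peaks in the SNR time series flag candidate events.
corr = np.fft.irfft(np.fft.rfft(strain) * np.conj(np.fft.rfft(template)),
                    n=t.size)
snr = np.abs(corr) / np.sqrt(np.sum(template**2))
print(f"peak SNR {snr.max():.1f} at lag {t[snr.argmax()]:.2f} s")
```

The computational burden of real searches comes from repeating this correlation over many templates and long data stretches, which is what motivates the ML-based alternatives discussed below.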

Observations of gravitational-wave transients are also susceptible to environmental and instrumental noise. Transient noise artifacts can be misidentified as a potential source, especially when the gravitational-wave transients have an unknown morphology (e.g., supernovae, neutron star glitches). Line noise in the noise spectrum of the instruments can affect the search for continuous gravitational waves (e.g., spinning neutron stars) and stochastic gravitational waves (e.g., astrophysical background of gravitational waves from unresolved compact binary systems). These noise sources are difficult to simulate, and current noise subtraction techniques are insufficient to remove the more complex noise sources, such as non-linear and non-stationary ones.

Existing and Planned Work: In recent years, machine learning algorithms have been explored in different areas of gravitational-wave physics (Cuoco et al., 2020). CNNs have been applied to detect and categorize compact binary coalescence gravitational waves (Kim et al., 2015, 2020; Gabbard et al., 2018; George and Huerta, 2018; Gebhard et al., 2019), burst gravitational waves from core-collapse supernovae (Astone et al., 2018; Chan et al., 2020; Iess et al., 2020), and continuous gravitational waves (Dreissigacker et al., 2019; Beheshtipour and Papa, 2020). In addition, autoencoders based on recurrent neural networks (RNNs) have been explored to detect gravitational waves with an unsupervised strategy (Moreno et al., 2021). FPGA-based RNNs have also been explored, demonstrating the potential for low-latency detection of gravitational waves (Que et al., 2021). Applications of ML in searches for other types of gravitational waves, such as generic bursts and the stochastic background, are currently being explored. Moreover, probabilistic and generative ML models can be used for posterior sampling in gravitational-wave parameter estimation and achieve performance comparable to Bayesian samplers on mock data while taking significantly less time to complete (Shen et al., 2019; Chua and Vallisneri, 2020; Gabbard et al., 2020). ML algorithms are also being used to improve gravitational-wave data quality and to subtract noise. Transient noise artifacts can be identified and categorized from their time-frequency transforms and constant-Q transforms (Zevin et al., 2017; Razzano and Cuoco, 2018) or through examining hundreds of thousands of LIGO's auxiliary channels (Biswas et al., 2013). These auxiliary channels can also be used to subtract quasi-periodic noise sources (e.g., spectral lines) (Ormiston et al., 2020; Vajente et al., 2020). Although ML algorithms have shown a lot of promise in gravitational-wave data analysis, many of these algorithms are still at the proof-of-concept stage and have not yet been successfully applied in real-time analysis. Current efforts seek to create a computational infrastructure for low-latency analysis, improve the quality of the training data (e.g., expanding the parameter space, using a more realistic noise model), and better quantify the performance of these algorithms on longer stretches of data.

2.8. Biomedical Engineering

Context: We have seen an explosion of biomedical data, such as biomedical images, genomic sequences, and protein structures, due to advances in high-resolution and high-throughput biomedical devices. AI-augmented reality-based microscopy (Chen et al., 2019) enables automatic analysis of cellular images and real-time characterization of cells. Machine learning is used for in-silico prediction of fluorescent labels, label-free rare cell classification, morphology characterization, and RNA sequencing (Christiansen et al., 2018; Tang et al., 2018; Li et al., 2020a; Siu et al., 2020; Wang et al., 2020b). For in-situ cell sorting, real-time therapy response prediction, and augmented reality microscope-assisted diagnosis (Nitta et al., 2018; Chen et al., 2019; Sakellaropoulos et al., 2019), it is important to standardize and optimize the data structures in deep learning models to increase speed and efficiency. Various machine-learning-based algorithms for detecting hemorrhages and lesions, accelerating diagnosis, and enhancing medical video and image quality have also been proposed for biopsy analysis and surgery assistance.

Challenges: A major challenge for the clinical application of ML is inadequate training and testing data. The medical data annotation process is both time-consuming and expensive for large image and video datasets, which require expert knowledge. The inference latency of trained models also introduces computational difficulties in performing real-time diagnosis and surgical operations. Quality of service for time-critical healthcare requires latencies below 300 ms, as in real-time video communication (Shukla et al., 2019). To reach 60 frames per second (FPS) for high-quality medical video, the efficiency and performance of a deep learning model become crucial.

Existing and Planned Work: Many changes in ML algorithms have involved improvements to both accuracy and inference speed. Some state-of-the-art machine learning models can reach a high speed for inference. For example, YOLOv3-tiny (Adarsh et al., 2020), an object detection model commonly used for medical imaging, can process images at over 200 FPS on a standard dataset while producing reasonable accuracy. Currently, GPU- and FPGA-based models (Chang and Sheu, 2020; Satpathy et al., 2020; Zhang et al., 2020a), distributed networks of wireless sensors connected to cloud ML (edge computing), and 5G/high-speed-WiFi-based ML models are deployed in medical AI applications (Chen et al., 2018; Morocho-Cayamcela et al., 2019; Zhang et al., 2020c). ML models for fast diagnosis of stroke, thrombosis, colon polyps, cancer, and epilepsy have significantly reduced the time spent in lesion detection and clinical decision-making (Bagheri et al., 2019; Horie et al., 2019; Lee et al., 2020; Nafee et al., 2020; Nogueira-Rodríguez et al., 2020). Real-time AI-assisted surgery can improve perioperative workflows, perform video segmentation (Volkov et al., 2017), detect surgical instruments (Choi et al., 2017a), and visualize tissue deformation (Tonutti et al., 2017). High-speed ML is playing a critical role in digital health, i.e., remote diagnosis, surgery, and monitoring (Zhang et al., 2020c).

2.9. Health Monitoring

Context: Our habits and behaviors affect our health and wellness. Unhealthy behaviors such as smoking, consuming excessive alcohol, or medication non-adherence often have an adverse effect on our health (Klesges et al., 1989; Baker et al., 2000; Sokol et al., 2005; White and Hingson, 2013). Traditional behavior monitoring approaches relied on self-reports, which were often biased and required intense manual labor (Althubaiti, 2016). With the advent of mobile and wearable devices, it is gradually becoming possible to monitor various human behaviors automatically and unobtrusively. Over the years, researchers have either developed custom wearable hardware or used off-the-shelf commercial devices for mobile and wearable health (mHealth) monitoring (Ali et al., 2012; Dong et al., 2012; Parate et al., 2014; Bi et al., 2018; Mishra et al., 2020; Sen et al., 2020; Zhang et al., 2020b). The automatic and unobtrusive monitoring capability of these devices makes it possible to detect, identify, and monitor behaviors, including unhealthy behaviors, in a free-living setting.

Challenges: There are various challenges associated with monitoring habits and behaviors using wearable devices. Firstly, these devices should be capable of monitoring unhealthy behaviors accurately, and in real-time. The occurrence of these unhealthy behaviors in a free-living setting is often sparse as compared to other behaviors and thus it is important to spot them accurately, whenever they occur. Most existing systems take an offline ML approach of detecting these unhealthy behaviors, where the ML algorithm identifies these behaviors well after they have occurred. An offline approach prevents providing interventions that can minimize unhealthy behaviors. Thus, it is necessary to develop ML approaches that can detect these behaviors online, and in real-time, so that interventions such as just-in-time adaptive interventions (JITAIs) can be delivered. Secondly, since these devices capture sensitive information, it is necessary to ensure that an individual's privacy is preserved. Privacy-preserving approaches such as locally processing the data on-device can be taken so that critical information does not leave the device. Other approaches, such as collaborative learning, aim to increase speed while preserving data privacy (Idé et al., 2019). Finally, these behaviors can occur in various heterogeneous environments and thus the health monitoring system should be agnostic to where the behavior occurs. Such monitoring requires developing multiple machine learning models for diverse environments.

Existing and Planned Work: While existing work has ventured in various directions, there is a growing need for sensing health biomarkers correctly and for developing ML approaches that are fast and can accurately identify these biomarkers. Researchers have focused on developing novel sensing systems that can sense various health behaviors and biomarkers (Holz and Wang, 2017; Bui et al., 2019; Bedri et al., 2020; Chun et al., 2020; Echterhoff and Wang, 2020; Li et al., 2020b; Pham et al., 2020). Historically, most of these novel sensing techniques were tested in controlled settings, but more recently researchers are ensuring that these systems can work seamlessly in free-living settings as well. This often requires developing multiple ML models, each catering to a specific context and environment. A new trend in this field relies on models that can be deployed on-device and are both quick and accurate in detecting these behaviors. In addition to enabling real-time interventions (Thomas and Bond, 2015; Nahum-Shani et al., 2018), on-device monitoring of these behaviors can reduce privacy concerns (Sadek et al., 2019). However, since wearable devices themselves might not be capable of processing the data, federated machine learning approaches have also recently been explored by several researchers (Rieke et al., 2020).

2.10. Cosmology

Context: Cosmology is the study of the Universe's origin (big bang), evolution, and future (ultimate fate). The large-scale dynamics of the universe are governed by gravity, where dark matter plays an important role, and the accelerating expansion rate of the universe itself, caused by the so-called dark energy. A non-exhaustive list of cosmological probes includes type Ia supernovae (Riess et al., 1998; Perlmutter et al., 1999; Betoule et al., 2014; Scolnic et al., 2018; Abbott et al., 2019b), cosmic microwave background (Fixsen et al., 1996; Spergel et al., 2003; Komatsu et al., 2011; Planck Collaboration et al., 2016, 2020), large-scale structures (including baryon acoustic oscillation) (Eisenstein et al., 2005; Percival et al., 2010; Delubac et al., 2015; Abbott et al., 2019a), gravitational lensing (Bacon et al., 2000, 2003; Collett and Auger, 2014; Suyu et al., 2017; Heymans et al., 2020) and 21 cm cosmology (McQuinn et al., 2007; Pritchard and Loeb, 2012; Maartens et al., 2015; Beardsley et al., 2016).

Challenges: As astronomy is approaching the big data era with next-generation facilities, such as the Nancy Grace Roman Space telescope (Sanderson et al., 2019), Vera C. Rubin Observatory (Ivezić et al., 2019), and Euclid telescope (Amiaux et al., 2012), the uncertainty budget in the estimation of cosmological parameters is no longer expected to be dominated by statistical uncertainties, but rather by systematic ones; understanding such uncertainties can lead to attaining sub-percent precision. On the other hand, the immense stream of astronomical images will be impossible to analyze in a standard fashion (by human interaction); new automated methods are needed to extract valuable pieces of cosmological data.

Existing and Future Work: Current efforts are focused on applying ML techniques to study the influence of systematic biases on available analysis methods (e.g., for purposes of fitting or modeling) or on developing new methods to overcome present limitations; for example CNNs can be adapted to spherical surfaces to generate more accurate models when producing weak lensing maps (Perraudin et al., 2019), or to remove noise from cosmic microwave background maps (Petroff et al., 2020). In addition, discovery and classification engines are being developed to extract useful cosmological data from next-generation facilities (Narayan et al., 2018; Mahabal et al., 2019; Förster et al., 2020; Möller et al., 2020). Furthermore, ML is also being used in cosmological simulations to test new analyses and methods and to set the foundations for the first operation of such new facilities (Kamdar et al., 2016; Rodríguez et al., 2018; Villaescusa-Navarro et al., 2020). An extensive list of published ML applications in cosmology can be found in Stein (2020).

2.11. Plasma Physics

Context: The focus of this description is on the Plasma Physics/Fusion Energy Science domain with regard to the major system constraints encountered for existing and expected algorithms and data representations when dealing with the challenge of delivering accelerated progress in AI-enabled deep machine learning prediction and control of magnetically confined thermonuclear plasmas. Associated techniques have enabled new avenues of data-driven discovery in the quest to deliver fusion energy, identified by the 2015 CNN "Moonshots for the twenty-first Century" televised series as one of five prominent grand challenges for the world today.

Challenges: An especially time-urgent and challenging problem is the need to reliably predict and avoid large-scale major disruptions in "tokamak systems" such as the EUROFUSION Joint European Torus (JET) today and the burning plasma ITER device in the near future, a ground-breaking $25B international burning plasma experiment with the potential capability to exceed "breakeven" fusion power by a factor of 10 or more, with "first plasma" targeted for 2026 in France. The associated requirement is for real-time plasma forecasting with control capabilities operative during the temporal evolution of the plasma state, well before the arrival of damaging disruptive events. High-level supervisory control of many lower-level control loops via actuators (analogous to advanced robotics operations) will be essential for ITER and future burning plasmas to protect the facility and to avoid operational limits (for magnets, walls, plasma position, stability, etc.) while optimizing performance.

Existing and Planned Work: In short, an overarching goal here involves developing realistic predictive plasma models of disruptions integrated with a modern plasma control system to deliver the capability to design experiments before they are performed. The associated novel AI-enabled integrated modeling tool would clearly be of great value for the most efficient and safe planning of the expensive discharges in ITER and future burning plasmas. Verification, validation, and uncertainty quantification of associated components would include: (1) development of predictive neural net models of the plasma and actuators that can be extrapolated to burning plasma scales via advanced Bayesian reinforcement learning methods that incorporate prior information into efficient inference algorithms; (2) systematic well-diagnosed experimental validation studies of components in the integrated plasma forecasting models involving massive amounts of data from major tokamak experiments worldwide (e.g., DIII-D in the US, KSTAR & EAST in Asia, JET in Europe, followed by JT60 SA—the large superconducting device in Japan that will precede ITER). This would ideally lead to a mature AI-enabled comprehensive control system for ITER and future reactors that feature integration with full pilot-plant system models.

At present, a key challenge is to deliver significantly improved methods of prediction with better than 95% predictive accuracy, providing advanced warning so that disruption avoidance/mitigation strategies can be applied effectively before critical damage is done to ITER. Significant advances in the deployment of deep recurrent neural networks and CNNs are well illustrated by Princeton's deep learning code, FRNN, which has enabled the rapid analysis of large, complex datasets on supercomputing systems. The associated acceleration of progress in predicting tokamak disruptions with unprecedented accuracy and speed is described in Kates-Harbeck et al. (2019). This paper (and the extensive references cited therein) describes the FES data representation of physics features (density, temperature, current, radiation, fluctuations, etc.) and the nature of key plasma experiments, featuring detectors/diagnostics with frame-level (event-based) accuracy that account for the required "zero-D" (scalar) and higher-dimensional signals and real-time resolution recorded at manageable data rates. Rough future estimates indicate that ITER will likely require dealing with the challenge of processing and interpreting exabytes of complex spatial and temporal data.
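The basic recurrent pattern behind such predictors can be sketched as follows. This is not the FRNN code of Kates-Harbeck et al. (2019); the number of diagnostic channels, layer sizes, and alarm threshold are illustrative assumptions.

```python
# A hedged sketch of per-timestep disruption-risk prediction with an RNN.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Per-timestep disruption risk from multichannel diagnostic streams."""
    def __init__(self, n_signals=14, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_signals, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_signals)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.head(h))       # risk score at every timestep

model = DisruptionPredictor()
shot = torch.randn(1, 2000, 14)                  # one discharge: 2000 timesteps
risk = model(shot)                               # shape (1, 2000, 1)
alarms = (risk.squeeze() > 0.9).nonzero()        # timesteps that would raise an alarm
```

The key property for control applications is that the risk estimate is produced continuously during the discharge, so an alarm can be raised with enough lead time for mitigation actuators to act.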

Since simulation is another vital aspect of ITER data analysis, dealing with the associated major computational expenses will demand the introduction of advanced compression methods. More generally, real-time predictions based on actual first-principles simulations are important for providing insights into instability properties and particle-phase-space dynamics. This motivates the development of an AI-based "surrogate model", for example of the well-established HPC "gyrokinetic" particle-in-cell simulation code GTC (Lin et al., 1998), that would be capable of accurately simulating plasma instabilities in real time. Data preparation and the training of a surrogate model, e.g., "SGTC", provide a clear example of the modern task of integration/connection between modern High Performance Computing (HPC) predictive simulations and AI-enabled deep learning/machine learning campaigns. These considerations also serve to further illustrate/motivate the need to integrate HPC and big data ML approaches to expedite the delivery of scientific discovery.

As a final note, the cited paper (Kates-Harbeck et al., 2019) represents the first adaptable predictive DL software trained on leadership-class supercomputing systems to deliver accurate predictions for disruptions across different tokamak devices (DIII-D in the US and JET in the UK). It features the unique statistical capability to carry out efficient "transfer learning" via training on a large database from one experiment (i.e., DIII-D) and to accurately predict disruption onset on an unseen device (i.e., JET). In more recent advances, the FRNN inference engine has been deployed in a real-time plasma control system on the DIII-D tokamak facility in San Diego, CA. This opens up exciting avenues for moving from passive disruption prediction to active real-time control with subsequent optimization for reactor scenarios.

2.12. ML for Wireless Networking and Edge Computing

Context: Wireless devices and services have become a crucial tool for collecting and relaying big data in many scientific studies. Moreover, mobility information has proven to be extremely useful in understanding human activities and their impact on the environment and public health. The exponential growth of data traffic is placing significant pressure on the wireless infrastructure. In particular, inter-cell interference causes large variability in reliability and latency. To meet user demands for data communication and value-added AI/ML services, wireless providers must 1) develop more intelligent learning algorithms for radio resource management that adapt to complicated and ever-changing traffic and interference conditions; and 2) realize many ML/AI computations and functionalities in edge devices to achieve lower latency and higher communication efficiency.

Challenges: Conventional implementations of ML models, especially deep learning algorithms, are far too slow relative to packet-level dynamics to be useful. Moreover, existing ML/AI services are often performed in the cloud for efficiency, at the expense of communication overhead and higher latency. A major challenge in the wireless networking and edge computing context is to build a computing platform that can execute complex ML models at relevant timescales (< 10 ms) within small-cell access points.

Existing and Planned Work: Researchers have proposed a variety of learning algorithms to perform specific radio resource management tasks using artificial neural networks (Calabrese et al., 2018; Challita et al., 2018; Huang et al., 2020; Zhu et al., 2020). Some of the first proposals to train a NN to perform transmit power control adopt supervised learning (Sun et al., 2018; Liang et al., 2020). More recent proposals adopt deep reinforcement learning approaches that work better with channel and network uncertainties and require little training data a priori (Liang et al., 2019; Zhao et al., 2019b; Meng et al., 2020; Nasir and Guo, 2020). A number of works focus on the convergence of edge computing and deep learning (Chen and Ran, 2019; Zhang et al., 2019a; Wang et al., 2020d). A specific body of work addresses federated learning, where participants jointly train their models in lieu of sending all their data to a central controller for training purposes (Amiri and Gündüz, 2020; Niknam et al., 2020; Ren et al., 2020; Chen et al., 2021). All of the preceding work essentially stops at the simulation stage for lack of practical ML/AI solutions that are simultaneously fast and computationally efficient. More specifically, the research challenge is to develop a computing platform that can execute complex ML models at a very fast timescale (< 10 ms) and that can be deployed in small-cell access points. One project with potentially very high impact is to map intelligent radio resource management algorithms (such as that of Nasir and Guo, 2020) onto an FPGA device suitable for deployment in a large network of connected and interfering access points. Another interesting project is to build a federated learning system to conduct time-sensitive ML for Internet-of-Things (IoT) devices where transferring data to centralized computing facilities is latency-prohibitive. This opens up entirely new possibilities for low-cost closed-loop IoT devices in healthcare, smart buildings, agriculture, and transportation.
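As a sketch of the federated-learning pattern mentioned above, the toy example below has each client train a copy of a global model on its own data and exchange only model weights, which are then averaged. The model size, client count, and training settings are illustrative assumptions rather than a recommendation from the cited works.

```python
# A hedged sketch of federated averaging: raw data never leaves the clients.
import copy
import torch
import torch.nn as nn

def local_update(model, data, targets, steps=5, lr=0.01):
    """Train a copy of the global model on one device's local data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(local(data), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return local.state_dict()

def federated_average(state_dicts):
    """Average the parameters returned by the participating devices."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
clients = [(torch.randn(32, 8), torch.randn(32, 1)) for _ in range(4)]  # private data

for communication_round in range(3):
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(updates))  # only weights move
```

The communication cost per round is set by the model size rather than the dataset size, which is why this pattern suits bandwidth- and privacy-constrained edge deployments.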

3. Key Areas of Overlap

Real-time, accelerated AI inference shows promise for improving the discovery potential of current and planned scientific instruments across the domains detailed in Section 2. The design of high-performance, specialized systems for real-time/accelerated AI applications requires particular attention to the figure of merit of the target domain's ML algorithm, which may be dominated by latency per inference, computational cost (e.g., power consumption), reliability, security, or the ability to operate in extreme environments (e.g., radiation). For instance, ML might need to: trigger acquisition systems for rare events with ~100 ns latency at the Large Hadron Collider (Duarte et al., 2018); analyze multi-channel ambulatory health monitors at kilohertz frequencies, where wireless transfer of data is not possible due to power limitations (~50 iPhone batteries/day for data transfer) or security requirements; or keep pace with materials-spectroscopy data streams on the order of terabits per second (Hart et al., 2017). Furthermore, real-time analysis of advanced scientific instrumentation must have an uninterrupted allocation of computing resources, and sensitive patient information processed by wireless health devices must be secured. Such features and characteristics create quantifiable guidelines for understanding distinctions and commonalities among domains and applications. In this way, we can coordinate efforts toward creating fundamental design principles and tools which may address needs across seemingly disparate domains. Appropriate data representation is an essential first step of the design process, as it determines the choice of NN architecture to be implemented in real-time systems that need to meet the performance targets outlined above. Prominent data representations of different scientific instruments are summarized below. Other areas of overlap across domains, such as NN and hardware co-design tools and workflows and NN complexity reduction with quantization and pruning, are recent technology advancements in real-time/accelerated AI and are therefore outlined in Section 4.

3.1. Data Representations

Data representation used in a particular domain influences both the computation system and data storage. One global classification of data representations across domains is into raw vs. reconstructed data. The data representation often varies depending on the stage of the reconstruction and the upstream steps in the data processing pipeline. Existing applications include fully connected NNs, which often take pre-processed expert feature variables as inputs, and CNNs, when the data are image-like. Ongoing development of domain-knowledge-inspired NN algorithms could further exploit expert features to improve accuracy and efficiency, as detailed below. To fully exploit the power of advanced NNs and bring them closer to data creation for minimum information loss, a more suitable representation of the raw data, e.g., as point clouds, needs to be employed. Prominent representations for raw data from different experimental and measurement systems are:

Spatial Data: Used for describing physical objects in geometric space. There are two main types, called vector and raster data. Vector data, in turn, can be comprised of points, lines, or polygons. Raster data refers to a grid of pixels, such as images, but pixels can also represent other measurements such as intensity, charge, field strength, etc.

Point Clouds: Can be considered a type of spatial data. This data representation is created by collating a set of spatial data, i.e., points in a 3D space, that usually form an object in space collectively.

Temporal Data: Used to represent the state of a system/experiment at a particular time. Data collected across time, in a specific order, is classified in this manner. Time-series data is a subset of this representation, where data is sampled at regular time intervals. An example of time-series data can be seen in Figure 3, for the specific case of supernova classification.

Spatio-Temporal Data: Measurements and observations of a system can be collected across both the space and time dimensions. In that case, the data can be considered spatio-temporal.

Multispectral Data: Used to represent the outputs of multiple sensors that capture measurements from multiple bands of the electromagnetic spectrum. Multispectral representation is commonly used in the context of imaging, involving sensors that are sensitive to different wavelengths of light. This usually involves on the order of a few to tens of spectral bands.

Hyperspectral Data: Used to represent measurements from a high number of spectra, e.g., in the order of 100s. These images collected from different narrow-band spectra are combined into a so-called hyperspectral cube with three main dimensions. The first two reference the 2D spatial placement (e.g., earth's surface) while the third dimension represents the complete spectrum content at each “pixel” location.

Figure 3. Simulated type Ia supernova light-curve and classification. Top: calibrated flux evolution in different DES band-passes as a function of normalized time (the first photometric measurement is set to time equals zero). Bottom: baseline RNN classification probability evolution with respect to time; no host-galaxy redshift information was provided. At each photometric measurement, a classification probability is obtained. The maximum light of the simulated supernova is shown as a gray dashed line, and the simulated redshift of the supernova is shown at the top (z = 0.466). We highlight that redshift is not used for this classification but can improve results. Our baseline RNN classifies this light-curve as a type Ia SN with great accuracy before maximum light; it only requires a handful of photometric epochs (Möller and de Boissiére, 2019).

In Table 1, we match these data representations to scientific application domains and give a brief description. We highlight the data representations which are particularly important for a specific domain. We will give more detailed examples below.

Table 1. Types of data representations and their relevance for the scientific domains discussed in this paper; ✓✓= Particularly important for domain, ✓= Relevant for domain.

The cost of data communication (in terms of latency) and of data storage (in terms of acquiring and managing the physical storage resources) presents important challenges. In particular, application domains that require real-time analysis and/or real-time feedback demand highly optimized data analytics solutions. Applications that rely on hyper-spectral data are faced with an ever-increasing rate of data input across the electromagnetic spectrum. High-speed data reduction is required in these domains. Applications that generate large-scale point clouds similarly demand efficient compression of their spatial data. Application domains that handle multi-spectral data with limited spatial resolution require ultra-fast reconstruction in order to enable real-time control feedback. Another challenge is posed by applications that rely on accurate analysis of streaming time-series data, yet are forced to perform under highly limited storage and communication resources, either due to privacy and security concerns or limitations of the associated edge devices.

Some current efforts in developing ML solutions for data processing front-ends focus on autoencoder-based compression engines (Herwig et al., 2020; Loncar et al., 2020). ML-based dimensionality reduction for hyperspectral data is another direction that has drawn attention (Agar et al., 2019). Deep learning-based approaches are also investigated for image reconstruction, with materials science being one of the most active fields in this regard (Schmidt et al., 2019).
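To make the autoencoder-based approach concrete, the following is a minimal sketch (not the architecture of the cited front-end designs); the input size of 128 channels and the 16-dimensional latent space are arbitrary illustrative choices. Only the encoder would run in the front-end, with the decoder applied offline.

```python
# Minimal sketch of an autoencoder-based compression engine (illustrative sizes
# only; not the architecture of the cited front-end ASIC/FPGA designs).
from tensorflow.keras import layers, models

n_inputs = 128   # e.g., flattened sensor channels (hypothetical size)
n_latent = 16    # compressed representation transmitted off-detector

encoder = models.Sequential([
    layers.Input(shape=(n_inputs,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_latent, name="latent"),
])
decoder = models.Sequential([
    layers.Input(shape=(n_latent,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_inputs),
])

autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
# Train to reproduce the input; only the encoder is deployed in the front-end:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```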

3.1.1. Expert Feature DNNs

One straightforward approach to building powerful domain-specific ML algorithms is to start with expert domain features and combine them in a neural network or other multivariate analysis technique. This embedded expertise has inherent advantages because the input features are interpretable, and correlations between features can yield insight into a particular task while optimizing performance. Furthermore, depending on the computational complexity of the domain features, the computational efficiency of such a machine learning approach can be greater than the direct use of raw features. The downside, however, is that by using expert features we rely entirely on the informativeness of those features.
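As a minimal illustration of this approach, the sketch below feeds a hypothetical set of ten expert features (e.g., jet substructure observables in HEP) into a small fully connected classifier; the feature count and layer widths are illustrative only.

```python
# Sketch: a small fully connected classifier built on expert (domain) features.
from tensorflow.keras import layers, models, metrics

n_expert_features = 10   # e.g., masses, energy correlations, shape variables

model = models.Sequential([
    layers.Input(shape=(n_expert_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # signal vs. background probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[metrics.AUC()])
# model.fit(x_expert, y_labels, epochs=20, batch_size=1024)
```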

Therefore, there is a lot of interest in automating the process of building informative new features from raw features. In image classification tasks, for example, a lot of progress has been made in extracting high-level data representations through deep neural networks (DNNs) (Goodfellow et al., 2016). In DNNs, layers of neurons above the original input signal are built to ensure that each new layer captures a more abstract representation of the data. Each layer constructs new features by forming nonlinear combinations of the features in the layer below. This hierarchical approach to feature construction has been effective in disentangling factors of variation in the data (Hinton and Salakhutdinov, 2006; Bengio et al., 2013; Goodfellow et al., 2016), and has been useful for constructing informative and meaningful representations. In astronomical images, for example, a DNN starts with low-level pixel information, gradually capturing edges, motifs, and eventually entire objects (e.g., galaxies) at upper layers to provide a broad view of the Universe (Dominguez Sanchez et al., 2018; Huertas-Company et al., 2018). The same applies to other fields of science. For example, detecting particles in large accelerators requires transforming low-level signals into dynamic patterns that can be ascribed to specific particles (Belayneh et al., 2020). In medical imaging, there is a need to quickly identify abnormal tissue from low-level pixel information by gradually capturing global tissue patterns (Bychkov et al., 2018). The importance of transforming the initial input data into meaningful abstract representations cannot be overstated: it remains one of the most powerful properties of modern neural network architectures.

Several challenges exist in the construction of increasingly abstract representations using DNNs. One challenge is to incorporate domain knowledge (e.g., physical constraints) into the neural network model. This is important for reducing the excessive amounts of data otherwise needed to train a DNN and for narrowing the gap in representational bias between the model and the target concept. When data are scarce but domain expertise is abundant, adding domain knowledge can expedite the training process (Xie et al., 2021) as well as improve the model's generalization performance. Another challenge is to develop tools for model interpretability by explaining the semantics of the representations embedded at each layer (Chakraborty et al., 2017). This is challenging due to the distributed representation of information in the network architecture.

Despite the lack of a formal mechanism to attain a seamless integration between a statistical model and domain knowledge, current approaches point to interesting directions, e.g., using knowledge to add training data or to change the loss function (Vo et al., 2017). Model interpretability in DNNs has seen an upsurge in research over the past years (Chakraborty et al., 2017). Commonly, studies look at individual units and their activation patterns to elucidate what is learned across layers of neurons.

3.1.2. Frame-Based Images

Frame-based images are a suitable representation of the experimental data in multiple domains, such as neutrino detection with time projection chambers in particle physics. An example of this data representation can be seen in Figure 4 for an electron deposition in the ProtoDUNE neutrino detector. A spatial frame is formed by plotting the time coordinate (“tick”) against the wire position in space. Recent developments in neural network architectures exploit the sparsity of such images to reduce the computational complexity for real-time/accelerated ML applications. Other types of experimental data in HEP and many other domains can also be processed into frame-based images, although often not without information loss.


Figure 4. A 6 GeV/c electron event in the ProtoDUNE detector. The x-axis shows the wire number. The y-axis shows the time tick in units of 0.5 μs. The color scale represents the charge deposition.

3.1.3. Point Clouds

The point cloud data representation is often used in HEP, where multiple frames of event-based measurements collected by a large number of detectors are combined into a data set. Across many HEP applications, point clouds commonly represent particle jets, with data rates exceeding Pb/s. More broadly, point clouds can be used to capture any 3D space event and the interactions of moving parts in space. In CMS, remnants of proton-proton collisions create sensor signals in a customized and optimized detector geometry, and the resulting hits are represented as points in space. Various types of scan-based imaging data can also be represented as point clouds. Other domains, such as CT and PET scanning in biomedical engineering and virtual reality, likewise utilize this representation for imaging. 3D scanners used for product design, solid object modeling, architecture, and infrastructure design leverage point clouds as well. Many of these imaging tasks generate point clouds of sizes on the order of several GB to TB. Domains sharing the point cloud representation (e.g., HEP and biomedical imaging) also commonly involve spatial characteristics.

3.1.4. Multi-/Hyperspectral Data

Multispectral data is common to wireless health monitoring and wireless communication systems. A set of physiological sensors, often representing different modalities, is combined into a multispectral data set for health monitoring and intervention systems. For wireless communication, signal interference and network traffic conditions are captured via multispectral data. Both domains capture this data across time, so it also exhibits temporal features. Furthermore, in both domains the generated data rates are relatively small (ranging from hundreds of Mb/s to tens of Gb/s) compared to the other domains discussed in this article. Hyperspectral data is used across many astronomy applications, in medical imaging, and in electron microscopy, which drives many materials science design and discovery applications. An example of hyperspectral data in electron microscopy is shown in Figure 5. An electron probe is rastered over a sample under study and diffraction patterns are captured on a pixelated detector, which records many images as the probe is scanned across the sample. Emerging multimessenger astronomy applications further emphasize the utility of hyperspectral data representations, combining observations from a wide array of detectors and telescopes.


Figure 5. Experimental 4D-STEM measurement of a dichalcogenide 2D material. An atomic map is inferred from the data; each diffraction pattern represents an average of 7 × 7 experimental images, and green STEM probes are labeled for regions of the sample with one layer, vacuum, and two layers (Ophus, 2019).

3.1.5. Time-Series Data

Time-series data is common in experiments that observe dynamically evolving systems, such as synthesis processes for material discovery or the temporal evolution of the plasma state in nuclear fusion experiments. It may consist of high-speed, temporally resolved imaging in materials science, or of physics features (density, temperature, current, radiation, fluctuations, etc.) and spatial features of the evolving plasma state as a function of time. In-situ diagnostics of time-series data can provide alerts to terminate an experiment early when an undesired outcome is indicated, avoiding the time-consuming and computationally expensive completion of the full experiment and offline analysis; this improves experiment operation efficiency and accelerates the discovery of materials with desired properties. In the case of the Fermilab Booster accelerator controls, for example, the voltages of the magnets that steer the proton beam around the synchrotron are recorded as 15 Hz time samples, and these data are used to build a digital twin that simulates the Booster. Furthermore, to reliably predict and avoid large-scale major disruptions in nuclear fusion experiments, real-time analysis of time-series data is crucial in guiding the actions needed for experimental prediction and control.
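As an illustrative sketch (not the digital twin used at the Booster), a recurrent network can be trained to predict the next sample of a multivariate time series; the window length and number of monitored signals below are arbitrary.

```python
# Sketch: recurrent next-step predictor for multivariate time-series data
# (e.g., control signals sampled at a fixed rate). Sizes are illustrative.
import numpy as np
from tensorflow.keras import layers, models

n_vars, window = 8, 64           # 8 monitored signals, 64 past samples per input
model = models.Sequential([
    layers.Input(shape=(window, n_vars)),
    layers.LSTM(32),
    layers.Dense(n_vars),        # predict the next sample of every signal
])
model.compile(optimizer="adam", loss="mse")

# x: (batch, window, n_vars) past readings; y: (batch, n_vars) next reading
x = np.random.randn(256, window, n_vars).astype("float32")
y = np.random.randn(256, n_vars).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```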

3.2. System Constraints

In this section, we present an overview of desired system properties and constraints that are prevalent across a number of application domains. Each scientific application poses unique challenges arising from its sensing technology, the underlying physical processes, and the relevant timescales, data rates, and bandwidth. These system constraints result in specific choices of data processing platforms, often with multiple compute architectures across the data continuum, such as FPGA-based systems vs. embedded processors, GPUs, or custom ASICs. Table 2 summarizes several scientific application domains along with their event rates, system latency constraints, performance requirements, and deployment characteristics. We broadly classify platforms for integrating fast machine learning techniques into “soft” (software-programmable coprocessors) and “custom” (custom embedded computing devices). Software-programmable systems are often preferred because they are less complex to implement, while custom embedded solutions are required when software-programmable systems cannot satisfy experimental throughput, bandwidth, or latency constraints. We describe this distinction in further detail below. Examples of these system design choices include the HEP trigger systems (LHC reconstruction of collision events, the Belle II experiment, and the Mu2e experiment), which deploy custom embedded systems. Meanwhile, experiments like the Electron-Ion Collider have data rates that may not require custom hardware solutions and could deploy only software-programmable solutions for event reconstruction and real-time processing. One final distinction worth discussing concerns the nature of real-time processing and the in-situ vs. post-mortem nature of the inference and analysis tasks. Examples that we consider in classifying tasks with different requirements are: data reduction, which primarily focuses on limiting the data collection rates of experiments for offline analysis; real-time processing and data analysis, which is required to extract real-time domain features of the data for tasks like filtering/triggering; and closed-loop controls, where data processing provides direct feedback to the operation and continuous control of an experiment. These distinctions and their consequences for the computing systems are illustrated in Table 3.


Table 2. Domains and practical constraints: systems are broadly classified as soft (software-programmable computing devices: CPUs, GPUs, and TPUs) and custom (custom embedded computing devices: FPGAs and ASICs).


Table 3. Classification of domains and their system requirements with respect to real-time needs.

3.2.1. Software Programmable Coprocessors

Historically, the first attempts at addressing the computational needs of the problems reviewed in this article have been through software-programmable systems. CPU-based local clusters and cloud services, as well as cloud computing resources with GPU- or TPU-based hardware accelerators, are employed in different applications. One particular concept explored by the HEP community is the GPU as a Service (GPUaaS) model (Krupa et al., 2020). This can be further expanded into the Machine Learning as a Service concept, similarly explored within HEP (Kuznetsov et al., 2020). These paradigms involve the implementation of machine learning modules to solve a set of physics problems, which are then transferred to GPU or TPU accelerators and accessed by the local CPU “client” of the native experimental system.

One of the major system constraints is the computational capacity, which, for neural network implementations, can be defined in terms of the number of floating point operations that can be sustained. Real-time machine learning methods require an ever-increasing computational capacity because it directly impacts the latency per task. The task could be a trigger decision for the LHC, reconstruction of an event in accelerator experiments or astrophysics, material synthesis, reconstruction of an image captured by an electron microscope, etc. Extreme parallelism is desired to provide the highest capacity possible, minimizing latency and maximizing throughput. In a processor-based system, this can be addressed by increasing the size of the compute cluster; naturally, facility costs impose a limit on the scale of these clusters. Another constraint is the available amount of storage, coupled with the cost of data movement across the memory hierarchy. In the majority of use cases, the latency involved in moving data from the front-end (detectors, microscopes, sensors, etc.) dominates the total latency. One of the prominent performance constraints is related to the utilization and resulting latency of the network that links the front-end with the back-end. Current limitations on the speed of data movement render CPU/GPU cluster-based systems unable to meet real-time requirements in many of these settings.

3.2.2. Custom Embedded Computing Devices

As the latency and throughput constraints are coupled with challenging practical energy constraints, efforts have been directed toward specialized computing systems to address hard real-time needs. An increasingly attractive paradigm is to design components that are finely optimized for specific steps in the data capture workflow. These components can be mapped onto FPGA devices, or they can be designed and manufactured as application-specific integrated circuits (ASICs). In the LHC and accelerator domains, there is a rich set of FPGA-based demonstrations of front-end data processing systems that meet microsecond latencies. These systems are in charge of tasks such as triggering, event reconstruction, and anomaly detection. Direct and naive implementations of neural networks for inference in these tasks can fail to meet the latency requirements because they often incur significant resource utilization. The highest achievable FPGA clock frequency and the inference latency are correlated with the resource utilization and percentage occupancy of the device. Co-design techniques developed for these applications particularly specialize in extreme quantization and pruning (with an awareness of accuracy) so that resource requirements can be controlled aggressively to meet inference latency targets. These optimizations push the resource usage envelope as far down as tens of percent of the FPGA device in order to meet the system constraints while still demonstrating implementations with high inference accuracy.

Some other applications (e.g., accelerator controls, biomedical and health applications) impose less stringent latency expectations, on the order of milliseconds, where the urgency of resource minimization is alleviated. Hence, the focus of the system design can shift from extreme resource economy to enhanced sophistication in the algorithms that are mapped to the device. Inference models can now include deep(er) learning models coupled with advanced video and signal processing engines, as well as local privacy-preserving processing tasks (applicable particularly to mobile health and to networking and communication applications).

For mobile and IoT-based deployment of edge devices, resource efficiency emerges as an important factor since it impacts energy consumption. However, in these applications, energy efficiency can also be achieved by alternative means. One option is selective powering: creating a resource-rich, full-featured baseline implementation that comfortably meets latency constraints as if energy were not an issue, and then introducing power gating or standby features to modulate energy consumption during periods of low or no activity.

There are system constraints that point designers to a custom ASIC solution in addition to, or in place of, FPGA devices. ASICs can address extreme form factor considerations, integration of computation with sensing (e.g., smart photon detectors) into compact front-end devices, tight integration with other mixed-signal or analog functionality, radiation hardening requirements, and ultra-low energy budgets.

4. Technology State-of-the-Art

In this section, we aim to give an overview of technologies and techniques for building fast ML algorithms. This requires codesign: building algorithms with hardware in mind and providing efficient platforms for programming the hardware. Sections 4.1, 4.2 focus on neural network design and training for efficient implementation in hardware. In Sections 4.3, 4.5, we classify our discussion of ML hardware compute platforms into two categories: “Conventional CMOS Hardware” and “Emerging Beyond CMOS Hardware.” The former will address nearer-term hardware solutions, while the latter will focus on the speculative end of the spectrum. Meanwhile, because the area of programming new hardware is rapidly moving, we lay out an example of the options and challenges for one device family: FPGAs. This is presented in Section 4.4, and from the details for FPGAs we hope the reader also gets a sense of the fundamental approaches for designing software for emerging hardware.

4.1. Systematic Methods for the Efficient Deployment of ML Models

As discussed in Section 2, many ML problems in science require low latency, often with constrained resources. However, most of the current state-of-the-art NN models have prohibitively high latency with a large memory footprint and energy consumption. For this reason, practitioners have been forced to use sub-optimal models (e.g., shallow NNs) with non-ideal accuracy to avoid this latency problem. There is a large body of literature that has focused on solving this problem by making NN models more efficient (in terms of latency, memory footprint, and energy consumption). These efforts could be broadly categorized as follows: (i) Designing new efficient NN architectures; (ii) NN and hardware co-design; (iii) Quantization (low precision inference); (iv) Pruning and sparse inference; and (v) Knowledge distillation. Here we briefly discuss each of these approaches.

Designing New Efficient NN Architectures: One line of research has focused on finding new NN models that are efficient by design. A notable early work is SqueezeNet (Iandola et al., 2016), a NN model without any expensive fully connected layers that, together with a new lightweight Fire module, resulted in a model 50× smaller than AlexNet with the same accuracy. Subsequently, several innovations were made in efficient NN architecture design. One focus has been to find efficient layers/operators. Notable works are group convolutions (Ioannou et al., 2017), depthwise convolutions (Howard et al., 2017), spatially separable convolutions (Mamalet and Garcia, 2012), shuffle layers (Ma et al., 2018), and shift convolutions (Wu et al., 2018a), to name a few.
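As a concrete illustration of an efficient operator, the sketch below contrasts a standard 3×3 convolution with a depthwise separable replacement (a depthwise convolution followed by a pointwise one), in the spirit of MobileNet-style designs; the channel counts are illustrative only.

```python
# Sketch: replacing a standard 3x3 convolution with a depthwise separable one.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(32, 32, 64))

# Standard convolution: 3*3*64*128 = 73,728 weights (ignoring biases).
standard = layers.Conv2D(128, kernel_size=3, padding="same")(inputs)

# Depthwise separable: 3*3*64 + 64*128 = 8,768 weights, roughly 8x fewer.
x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(inputs)
separable = layers.Conv2D(128, kernel_size=1)(x)

model = models.Model(inputs, [standard, separable])
model.summary()  # compare the parameter counts of the two branches
```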

Another focus has been to find substitutes for the Fire module that are more efficient and yield better accuracy/generalization. Notable works include residual networks (He et al., 2016) (originally designed to address vanishing gradients, but these structures are generally more efficient than non-residual architectures), densely connected networks (Huang et al., 2017), squeeze-and-excite modules (Hu et al., 2018a), and inverted residual blocks (Sandler et al., 2018).

These classical techniques mostly found new architecture modules through a manual design search. This is not scalable, and as such recent approaches have proposed automated methods based on neural architecture search (NAS). NAS methods automatically find the right NN architecture for a given constraint on model size, depth/width, and/or latency. The high-level approach is to train a probabilistic SuperNet that includes all possible combinations of NN architectures within the prescribed constraints, but with learnable probabilities. After this SuperNet is trained, one can sample an architecture from its learned probability distribution. Notable works include RL-based methods (Zoph and Le, 2016), efficient NAS (Pham et al., 2018), MNasNet (Tan et al., 2019), DARTS (Liu et al., 2018), and Differentiable NAS (Wu et al., 2019).

NN and Hardware Co-design: Another promising line of work is to tailor the NN architecture for a specific hardware platform and/or to co-design them together. This is quite promising for configurable hardware such as FPGAs. Hardware-aware NN design is important because the cost of performing different types of operations varies across hardware: for example, hardware with a dedicated cache hierarchy can execute bandwidth-bound operations much more efficiently than hardware without one. Notable works in this area include SqueezeNext (Gholami et al., 2018), where both the NN and the hardware accelerator were co-designed with a manual tuning approach. More recent works have proposed to automate hardware-aware design through NAS. Notable works include ProxylessNAS (Cai et al., 2018), OnceForAll (Cai et al., 2019b), FBNet (Wu et al., 2019), and MobileNetV3 (Howard et al., 2019).

Quantization (Low Precision Inference): A common solution is to compress NN models with quantization (Asanovic and Morgan, 1991; Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016, 2017; Cai et al., 2017, 2020b; Choi et al., 2018; Jacob et al., 2018; Zhang et al., 2018a; Dong et al., 2019; Wang et al., 2019c; Chin et al., 2020; Gholami et al., 2021), where low bit-precision is used for weights/activations. A notable work here is Deep Compression (Han et al., 2016), which used quantization to compress the footprint of the SqueezeNet model discussed above, making it 500× smaller than AlexNet. In quantization, the model size is reduced without changing the original network architecture, and it can potentially permit the use of low-precision matrix multiplication or convolution; therefore, both the memory footprint and the latency can be improved.

Quantization methods can be broadly classified into two categories: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT). In PTQ, a pre-trained model in single precision is quantized to low precision without any fine-tuning or re-training (Banner et al., 2018; Lee et al., 2018a; Choukroun et al., 2019; Meller et al., 2019; Nagel et al., 2019; Zhao et al., 2019c; Cai et al., 2020b; Fang et al., 2020a,b; Hawks et al., 2021). As such, these quantization methods are typically very fast and, in some cases, do not even require any training data (Nagel et al., 2019; Cai et al., 2020b; Haroush et al., 2020). However, PTQ often leads to high accuracy degradation, especially for low precision quantization. To address this, some quantization methods adopt QAT to re-train the model after quantization so that the parameters can adjust. This approach often results in higher accuracy, but at the cost of the additional time needed to re-train the model (Courbariaux et al., 2015; Lin et al., 2015; Hou et al., 2016; Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016, 2018a; Zhu et al., 2016; Cai et al., 2017; Gysel et al., 2018; Huang et al., 2021).

Another differentiator is the use of simulated quantization (also known as fake quantization) vs. integer-only quantization (Lin et al., 2016; Jacob et al., 2018; Yao et al., 2020b; Kim et al., 2021). In the former, the weights/activations are stored in low precision but are cast to higher precision during inference; in the latter, there is no casting, and the multiplication and accumulation also happen in low precision. Integer-only quantization has the advantage that inference can be sped up by using low-precision logic for multiplication and addition, in addition to reducing the memory footprint of the model.
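A minimal numerical sketch of uniform symmetric 8-bit quantization is shown below; in simulated (fake) quantization the int8 values are immediately cast back to float for compute, whereas integer-only quantization would keep the int8 values (plus the scale) for low-precision arithmetic.

```python
# Sketch: uniform symmetric quantization of a weight tensor to signed 8-bit.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0   # one scale per tensor (per-channel is also common)
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_int8, scale

w = np.random.randn(256, 128).astype(np.float32)
w_int8, scale = quantize_int8(w)

# Simulated ("fake") quantization: dequantize back to float for compute.
w_fake = w_int8.astype(np.float32) * scale
print("max abs quantization error:", np.abs(w - w_fake).max())
```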

Another distinction is hardware-aware quantization. Similar to NN architecture design, quantization can also be tailored for specific hardware platforms. This becomes important for mixed-precision quantization (Wu et al., 2018b; Zhou et al., 2018b; Dong et al., 2019, 2020, 2021; Wang et al., 2019b; Shen et al., 2020; Yao et al., 2020b). The reason is that certain operations in the NN model may benefit more from low precision quantization than others, depending on whether they are bandwidth-bound or compute-bound. As such, as schematically illustrated in Figure 6, one must determine the best precision setting based on the tradeoff between the potential footprint/latency gain and the sensitivity to accuracy degradation.


Figure 6. Illustration of hardware-aware quantization and pruning. A given NN model can be compressed by using low precision quantization instead of single precision. The extreme case is 0-bit quantization, which is equivalent to removing/pruning the corresponding neurons. The goal of compression is to find the best bit-precision setting for quantization/pruning that reduces the model footprint/latency on a target hardware platform with minimal generalization loss.
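The sketch below illustrates the per-layer sensitivity scan underlying such mixed-precision assignment, using the relative output error of randomly generated weight matrices as a stand-in for the true accuracy drop; a real flow would re-evaluate validation accuracy (and account for hardware cost) instead.

```python
# Sketch: per-layer sensitivity scan for mixed-precision quantization (toy data).
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization to the given bit width.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
weights = {f"layer{i}": rng.normal(size=(128, 128)) for i in range(4)}
x = rng.normal(size=(32, 128))

for name, w in weights.items():
    for bits in (8, 4, 2):
        err = np.linalg.norm(x @ quantize(w, bits) - x @ w) / np.linalg.norm(x @ w)
        print(f"{name}: {bits}-bit relative output error = {err:.4f}")
# Layers whose low-bit error stays small are candidates for aggressive
# quantization; more sensitive layers are kept at higher precision.
```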

Pruning and Sparse Inference: Another approach to reducing the memory footprint and computational cost of NNs is to apply pruning, which can be thought of as quantization to 0 bits. In pruning, neurons with small saliency (sensitivity) are removed, which results in a sparse computational graph (LeCun et al., 1990). Here, neurons with small saliency are those whose removal minimally affects the model output/loss function. Pruning methods can be broadly categorized into unstructured pruning (LeCun et al., 1990; Hassibi and Stork, 1993; Dong et al., 2017; Lee et al., 2018b; Xiao et al., 2019; Park et al., 2020) and structured pruning (Luo et al., 2017; He et al., 2018; Huang and Wang, 2018; Lin et al., 2018; Yu et al., 2018; Zhao et al., 2019a). Unstructured pruning removes neurons without any structure. With this approach, one can remove most of the NN parameters with little impact on the generalization performance of the model; however, it leads to sparse matrix operations, which are hard to accelerate and are typically memory-bound (Buluc and Gilbert, 2008; Gale et al., 2019; Blalock et al., 2020; Hoefler et al., 2021). This can be addressed with structured pruning, where a group of parameters (e.g., an entire output channel) is removed. The challenge here is that high degrees of structured pruning often lead to significant accuracy degradation.

In both approaches, the key question is to find which parameters to prune. A simple and popular approach is magnitude-based pruning (Hanson and Pratt, 1988; Mozer and Smolensky, 1988; Chauvin, 1989; Li et al., 2016b; He et al., 2017, 2019; Liu et al., 2017; Lin et al., 2020a). In this approach, the magnitude of parameters is used as the pruning metric. The assumption here is that small parameters are not important and can be removed.
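A minimal sketch of unstructured magnitude-based pruning is given below; the target sparsity of 80% is an arbitrary illustrative choice, and a real flow would fine-tune the network with the mask applied.

```python
# Sketch: unstructured magnitude-based pruning of a weight matrix.
import numpy as np

def magnitude_prune(w, sparsity=0.8):
    threshold = np.quantile(np.abs(w), sparsity)     # magnitude below which weights are dropped
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    return w * mask, mask

w = np.random.randn(512, 512).astype(np.float32)
w_pruned, mask = magnitude_prune(w, sparsity=0.8)
print("fraction of weights kept:", mask.mean())      # ~0.2
```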

An important problem with magnitude-based pruning methods is that parameters with small magnitudes can actually be quite sensitive. It is easy to see this through a second-order Taylor series expansion, where the perturbation is dependent on not just the weight magnitude but also the Hessian (LeCun et al., 1990). As such there are several works that use second-order based pruning (LeCun et al., 1990; Hassibi and Stork, 1993; Hassibi et al., 1993; Wang et al., 2019a; Yu et al., 2021).
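To make this concrete, the standard second-order (Optimal Brain Damage-style) argument can be sketched as follows, assuming the network sits at a local minimum (so the gradient term is negligible) and taking a diagonal approximation of the Hessian H:

```latex
\delta \mathcal{L} \;\approx\; \nabla_{\mathbf{w}} \mathcal{L}^{\top} \delta \mathbf{w}
  + \tfrac{1}{2}\, \delta \mathbf{w}^{\top} H\, \delta \mathbf{w}
  \;\approx\; \tfrac{1}{2} \sum_i H_{ii}\, \delta w_i^{2},
\qquad
\text{saliency of removing } w_i \;\approx\; \tfrac{1}{2} H_{ii}\, w_i^{2}.
```

A weight with a small magnitude but a large curvature term H_ii can therefore still be important, which is precisely what magnitude-only criteria miss.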

Finally, we should mention that it is possible to combine pruning and quantization to compress the NN model; indeed, pruning can be viewed as quantization to 0 bits. The recent work of Hawks et al. (2021) proposes a quantization-aware pruning method and applies it to high energy physics problems, reporting better results than pruning or quantization alone.

Knowledge Distillation: Model distillation (Romero et al., 2014; Hinton et al., 2015; Li et al., 2017; Mishra and Marr, 2017; Yim et al., 2017; Polino et al., 2018; Ahn et al., 2019; Yin et al., 2020) trains a large model and then uses it as a teacher to train a compact model. Instead of using class labels during the training of the student model, the key idea of model distillation is to leverage the soft probabilities produced by the teacher, which can guide/help the student training.
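A minimal sketch of the classic soft-label distillation loss (in the spirit of Hinton et al., 2015) is shown below; the temperature T and weighting alpha are illustrative hyperparameters, and labels are assumed to be integer class indices.

```python
# Sketch: soft-label knowledge distillation loss.
import tensorflow as tf

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a soft (teacher-matching) term and the usual hard-label term."""
    soft_teacher = tf.nn.softmax(teacher_logits / T, axis=-1)
    log_soft_student = tf.nn.log_softmax(student_logits / T, axis=-1)
    # KL(teacher || student) on temperature-softened distributions, with the usual T^2 scaling.
    soft_loss = tf.reduce_mean(
        tf.reduce_sum(soft_teacher * (tf.math.log(soft_teacher + 1e-8) - log_soft_student), axis=-1)
    ) * (T ** 2)
    hard_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=student_logits)
    )
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```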

Previous methods of knowledge distillation focus on exploring different knowledge sources. Hinton et al. (2015), Li et al. (2017), and Park et al. (2019) use logits (the soft probabilities) as the source of knowledge, while Romero et al. (2014), Yim et al. (2017), and Ahn et al. (2019) try to leverage the knowledge from intermediate layers. The choices of teacher models are also well studied, where You et al. (2017) and Tarvainen and Valpola (2017) use multiple teacher models to jointly supervise the student model, while Crowley et al. (2018) and Zhang et al. (2019b) apply self-distillation without an extra teacher model. Other previous efforts apply knowledge distillation with different settings on different applications. Lopes et al. (2017), Nayak et al. (2019), and Yin et al. (2020) study data-free knowledge distillation, and Wang et al. (2018a) and Wang et al. (2020e) combine knowledge distillation with GANs.

A major challenge of knowledge distillation methods is achieving a high compression ratio. Compared to quantization and pruning, which can usually maintain accuracy at 4× compression, knowledge distillation methods tend to show non-negligible accuracy degradation at those compression levels. The two approaches are orthogonal, however, and recent works have shown that their combination can result in high accuracy/compression (Mishra and Marr, 2017; Polino et al., 2018; Mao et al., 2020; Yao et al., 2020b). It should be mentioned that current distillation methods are mostly applied to classical ML problems, and few works have looked into their application to scientific AI problems.

4.2. Systematic Neural Network Design and Training

There is currently no analytical approach to finding the right NN architecture for a given task and training dataset. Originally, designing the NN architecture was mostly a manual task based on intuitions that were often ad hoc. However, in recent years there have been many innovations in automating the NN architecture design process, which is referred to as Neural Architecture Search (NAS) (Zoph and Le, 2016; Cai et al., 2018, 2019b; Liu et al., 2018; Pham et al., 2018; Tan et al., 2019; Wu et al., 2019).

NAS could be viewed as a hyperparameter tuning problem, where the hyperparameters are the design choices for a NN architecture. This could include width, depth, types of operations, etc. The main challenge is that the search space for the operation types scales exponentially with the number of layers. As such, one has to still include some high-level intuition about the NN architecture to limit the search space.

After limiting the search space, the general NAS process is as follows. A candidate architecture is sampled from the set of all possible architectures and is then trained for a number of epochs on the training dataset. Its accuracy is then used as the metric to evaluate how good that candidate architecture is, and based on this reward, the probability distribution for sampling architectures is updated. This process needs to be repeated for many different candidate architectures (sometimes exceeding hundreds of thousands). Inherently, this leads to another problem related to tuning the optimization hyperparameters for each candidate architecture: if a good architecture is sampled by the NAS but is trained with sub-optimal hyperparameters, the measured error will be high and the NAS algorithm will reduce the likelihood of sampling that architecture again, which is not the desired behavior.
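The sketch below shows the overall loop in its simplest possible form: random sampling from a tiny search space (depth and width of an MLP), a short proxy training, and validation accuracy as the reward. Real NAS methods replace the random sampling with a learned distribution (RL, gradient-based, or evolutionary) over a much richer space.

```python
# Sketch: a deliberately simple random-search "NAS" loop over a tiny search space.
import random
from tensorflow.keras import layers, models

search_space = {"depth": [1, 2, 3], "width": [16, 32, 64]}

def sample_architecture():
    return {k: random.choice(v) for k, v in search_space.items()}

def build_and_score(arch, x_train, y_train, x_val, y_val):
    model = models.Sequential([layers.Input(shape=(x_train.shape[1],))])
    for _ in range(arch["depth"]):
        model.add(layers.Dense(arch["width"], activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, verbose=0)       # short proxy training
    return model.evaluate(x_val, y_val, verbose=0)[1]       # validation accuracy as reward

# best = max((sample_architecture() for _ in range(20)),
#            key=lambda a: build_and_score(a, x_tr, y_tr, x_va, y_va))
```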

As a result, scalability has become an integral concern for any procedure in the presence of “big data.” One main class of procedures for which scalability has become indispensable is numerical optimization algorithms, which are at the core of training methods. There is a large body of literature on designing efficient numerical optimization/training methods (Gupta et al., 2018; Reddi et al., 2018; Shazeer and Stern, 2018; Zhang et al., 2019c; Ginsburg et al., 2020; Liu et al., 2020a; Ma, 2020; Park et al., 2020; Yao et al., 2020c; Zhuang et al., 2020), as well as efficient NAS algorithms to search for the right NN architecture (Zoph and Le, 2016; Liu et al., 2018; Pham et al., 2018; Tan et al., 2019; Wu et al., 2019).

For optimization, the goal is to design new methods that require fewer iterations to converge and are more robust to hyperparameter tuning. One notable advancement here is the ability to apply second-order methods without explicitly forming the second-order operator (Gupta et al., 2018; Reddi et al., 2018; Yao et al., 2019, 2020c). It has been shown that the performance and robustness of these methods exceed those of first-order optimization methods on classical ML problems (e.g., in computer vision or natural language processing). Interestingly, some recent results for Physics-Informed Neural Networks (PINNs) (Raissi et al., 2019) have found that first-order methods perform significantly worse than (quasi-)second-order methods. This could provide opportunities to adapt or redesign some of the second-order algorithms for science problems.
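As an illustration of the kind of quasi-Newton training loop this refers to (not the specific methods cited above), the sketch below fits a small network with L-BFGS in PyTorch on a toy regression target; for a PINN the loss would additionally include PDE residual terms.

```python
# Sketch: training a small network with a quasi-Newton optimizer (L-BFGS).
import torch

model = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(3.0 * x)                       # toy regression target

optimizer = torch.optim.LBFGS(model.parameters(), max_iter=200)

def closure():
    optimizer.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)   # a PINN would add PDE residual terms here
    loss.backward()
    return loss

optimizer.step(closure)
print("final loss:", closure().item())
```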

For the NAS algorithms, the goal is similar, which is to find methods that require evaluating fewer candidate architectures, with less manual restriction or tuning of the search space. Another goal is to design transferable NAS algorithms that can be trained on a small problem and then transferred to larger problems that are more expensive (Cai et al., 2018, 2019b).

In summary, the core of designing NN architecture is to have a fast method of sampling architectures (through NAS), and the fast training of the sampled architectures (through fast and robust optimization algorithms).

4.3. Hardware Architectures: Conventional CMOS

As the prevalence of and demands for machine learning continue to grow rapidly, it is increasingly important that we design machine learning algorithms efficiently and deploy them on complementary, powerful hardware platforms. The compute and memory demands of NN deployments are huge and are growing beyond the limits to which standard silicon-based semiconductors can scale. The reasons behind the scalability challenges in the semiconductor industry are as follows. First, as we approach the end of Moore's Law, transistor cost has been rising exponentially due to rising chip design costs at shrinking technology nodes (as published by Xilinx and Gartner as early as 2011; Trimberger, 2018). Furthermore, with the end of Dennard scaling, we have encountered considerable thermal challenges as power density no longer remains constant between node generations. To mitigate the challenges of increasing thermal density, chips are now designed to conditionally deliver power to groups of transistors, effectively throttling or "turning off" parts of a chip; this technique has come to be known as dark silicon (Esmaeilzadeh et al., 2011).

To overcome these challenges and provide sufficient compute capabilities, many disruptive approaches have been proposed. For example, Cerebras Systems (Cerebras, 2019) has brought to market the first computer system employing wafer-scale integration, where chips are built from complete wafers rather than individual dies. This technique brought with it substantial engineering challenges with regard to power delivery, packaging, and cooling. Exploring another dimension, foundries are investigating true 3D chip stacking, as presented at HotChips 2019 by TSMC (Hotchips, 2019). Analog computing, quantum computing, and in-memory computing are being investigated as well; see the more detailed discussion of beyond-CMOS neuromorphic computing in Section 4.5 below.

Less risky approaches focus on moving away from traditional von Neumann architectures, using specialization of compute architectures to provide the necessary performance scaling and energy efficiency. Due to this specialization, the devices become increasingly heterogeneous. A huge range of devices has emerged that all try to address this problem in different ways, and the key challenge is: how do we best loop-transform and unfold the algorithms to maximize data reuse and compute efficiency, minimize memory bottlenecks, and limit power consumption while meeting real-time requirements?

The choice of hardware type and quantity often boils down to a set of constraints imposed by the compute environment (datacenter, cloud, on-premise, edge, mobile), workload type (inference, training), data type (language, time series, vision, graph, etc.), ML model, usage model (online inference, batch jobs), and user-centric service-level agreements (encryption level, request latency, etc.). For large datacenter deployments handling various types of workloads, it is often the case that several platforms must be combined to reduce the total cost of ownership (TCO) across all hardware platforms. It has therefore become increasingly necessary for owners of heterogeneous platforms to think of their systems as large-scale multi-processor computers, a trend sometimes termed warehouse-scale computing (Luiz André Barroso, 2009). For deep learning hardware accelerators, these new computers generally take the form of a host CPU with attached co-processors: the host CPU communicates with other entities in the datacenter, interfaces with disk memory, and formats input data, which is then offloaded to the accelerator responsible for executing a user-defined compute graph, or neural network.

We begin with a taxonomy of these hardware architectures and discuss their relevant characteristics when it comes to the acceleration of machine learning workloads. This is essential to understand how they will differ in their execution behavior, what it takes to leverage their unique features and how they can potentially benefit from previously introduced optimization techniques.

Taxonomy of Compute Architectures for Deep Learning: A broad range of hardware architectures to deploy machine learning algorithms exists today. We can broadly classify them by the following criteria:

1. Basic type of compute operation

2. Inherent support for specific numerical representations

3. External memory capacity (which is mostly relevant for training workloads)

4. External memory access bandwidth

5. Power consumption in the form of thermal design power (TDP)

6. Level of parallelism in the architecture and the degree of specialization

As shown in Figure 7, we classify the compute architectures into scalar processors (CPUs), vector-based processors (GPUs), and so-called deep learning processing units (DPUs), although realistically these categories blend to some degree. DPUs are specialized for this application domain; within them we distinguish between the more generic matrix- or tensor-based processors and spatial processing approaches. DPUs can be implemented with either ASICs or FPGAs. All of these architectures are discussed individually below.


Figure 7. Taxonomy of compute architectures, differentiating CPUs, GPUs and DPUs.

CPUs: CPUs are widely used for ML applications and are viewed as largely serial or scalar compute engines (even though high-end variants for cloud deployment may have tens of cores). They are optimized for single-thread performance, with implicitly managed memory hierarchies (multiple levels of caches), and support floating point operations (FP64 and FP32) as well as 8-bit and 16-bit integer formats with dedicated vector units in the most recent variants. Theoretical peak performance tops out at 6.8 TOPs for FP64 assuming boost clock speed (Cascade Lake, 56 cores, 3.8 GHz). External memory currently primarily leverages DDR4 memory banks with large capacities: Intel's Cascade Lake offers up to 4.5 TebiBytes (1 TebiByte = 2^40 bytes), which is beyond what any of the other device categories can offer. Access is at maximum speed through high-end hardened memory controllers, offering 282 Gbps bandwidth (for example, Cascade Lake with 12 DDR4 channels). Compared to GPUs and other HBM-enabled devices, the memory bandwidth of CPUs is lower; however, for many use cases this can be compensated for by their sophisticated cache hierarchies, combined with mature compiler tools. Regarding power consumption, CPUs are at the upper end of the spectrum, with high-end devices ranging up to 400 W (Cascadelake, 2019). In the embedded space, ARM processors provide generally popular solutions, in particular when performance requirements are very low or when functionality is required that is not supported by the specialized device variants. In particular, the Ethos (Skillman and Edso, 2020) family of processing cores is specialized for CNN workloads and as such is considered under the DPU category below. The advantages of CPUs are the generality of the hardware as well as the ease of programming, where design environments have matured over decades. As expected, this comes at the cost of lower peak performance and lower efficiency compared to the more specialized device families. With regard to quantization, CPUs can only leverage this optimization technique for INT8 and INT16, where supported.

GPUs: GPUs are SIMD-based (single instruction, multiple data) vector processors that natively support smaller floating point formats (FP16), as well as, more recently, fixed point 8-bit and 4-bit integer formats, and have a mix of implicitly and explicitly managed memory. NVIDIA GPUs are some of the most popular hardware targets for machine learning, and newer families of chips have been introduced to specifically accelerate this workload, with AMD not far behind. The latest devices in NVIDIA's Volta and Turing architecture families, introduced in 2018 and 2019, respectively, offer up to 130 TOPs in FP16, which is beyond the capabilities of the latest CPU generations. As such, they are among the highest-performing devices on the market for the acceleration of DNNs, as they can exploit the high degree of parallelism inherent in this application via increasingly specialized architectural features. For example, NVIDIA's Volta is the first generation to incorporate tensor cores as a new feature, as well as improved FP32 and FP64 support for training in a data center setting (Durant et al., 2017), and also introduced a deep learning accelerator (DLA) in their embedded devices to further reduce power consumption. This specialization brings additional challenges for usage: there are now up to three distinct execution units, namely CUDA cores, tensor cores, and the DLA, which do not operate on the workload simultaneously (at least not easily or by default). We therefore do not sum the peak performance of the different execution units, but use only the maximum. AMD announced the Vega GPU (Exxactcorp, 2017) with new deep learning instruction set operations, with the goal of obtaining parity with NVIDIA's high-end Tesla V100 datacenter GPUs. Also, AMD's most recent EPYC family supports customized instructions for deep learning (Epyc, 2019). Both companies also offer low-power GPUs for the embedded space, namely the AMD Vega mobile GPU (Hardawar, 2018) and NVIDIA's Jetson TX2 (Franklin, 2017) and AGX families (AGX, 2019).

With regard to memory, GPUs leverage specialized and highly pipelined GDDR memory, which reduces capacity but offers much higher bandwidth (up to 732 GBps). With NVIDIA's Turing family, the latest devices include HBM2 memory stacks (Turing, 2019), which scale the memory access bandwidth to 1 TBps and beyond. Again, this is particularly important to address the needs of training workloads; for the same reason, some of the DPUs introduce HBM2 as well, as discussed below. With regard to power consumption, GPUs are at the high end, up to 345 W.

One general challenge for GPUs is that they need to leverage input parallelism to achieve high utilization of their large compute arrays. Therefore, before execution, inputs need to be grouped into batches, which has adverse effects on end-to-end latency. Further, GPUs are relatively high in power consumption. Regarding quantization, support is limited to the natively supported datatypes, which are INT4 at the smallest in the context of NVIDIA's Turing family and INT8 for many of the others. Finally, the corresponding software environments for GPUs, while not on the same level as for CPUs, have matured significantly and provide increased ease of use.

FPGAs and ASICs: FPGAs and ASICs customize hardware architectures to the specifics of a given application. They can be adapted in all aspects to suit a use case's specific requirements, including their I/O capability, their functionality, or even specific performance or efficiency targets. FPGAs can be reprogrammed, whereas ASICs are fully hardened. This flexibility allows the design costs of the circuit to be amortized across many applications, but comes at the expense of hardware resource cost and performance.

FPGAs are a popular choice for the acceleration of CNNs. Traditionally, an FPGA compute fabric consists of a sea of lookup tables (LUTs) interconnected through a programmable interconnect; the latest generations host millions of LUTs. Furthermore, the fabric is interspersed with specialized hardened compute blocks (DSPs), which accelerate n-bit multiply-accumulate operations (MACs), as well as SRAM blocks. The latter are referred to as block RAMs (BRAMs), which hold 36 kbits, and Ultra RAMs (URAMs), which store 288 kbits. More recent FPGA generations combine multiple FPGA dies, referred to as super logic regions (SLRs), and leverage a silicon interposer to provide connectivity between SLRs. This technology is referred to as stacked silicon interconnect technology (SSIT) and helps scale device capacity.

DPUs: As mentioned at the beginning of this section, the term DPU (short for deep learning processing unit) refers to a new type of compute architecture specialized for the acceleration of CNNs. DPUs are customized for these types of applications in a number of ways: the types of operations supported, direct support of tensors or matrices, the inherent data types and supported numerical representations, the macro-architecture, explicitly managed and specialized memory hierarchies, and the levels of parallelism they exploit (input, output pixel, IFM, OFM, bit, and layer and branch parallelism), as introduced earlier in this section. We differentiate two types of DPUs, which can be implemented with both ASIC technology and FPGAs.

Matrix of Processing Elements (MPE): The first type, as shown on the left side of Figure 8, consists of an MPE that operates on matrices or higher-dimensional tensors. The processing engines can be simple MACs, vector processors, or more complex VLIW (very long instruction word) cores that can support concurrent execution of different instructions. A popular example in this category is Google's Tensor Processing Unit (TPU). Introduced in 2016 (Sato et al., 2017), it was originally designed to accelerate Google's TensorFlow framework. The first generation supported integer arithmetic with a massively parallel INT8 matrix-multiply engine; the second generation TPU was announced in May 2017 (Jouppi et al., 2017), and the third generation in May 2018 (Teich, 2018). These newer chips boast improved memory performance as well as support for floating point, specifically aimed at training. There are a number of startups introducing custom hardware that falls into this category. Within the cloud, there are Graphcore, Groq, and Wave Computing. Within the embedded space, where the design constraints are even more stringent, we find even more solutions; most are secretive about the details of their designs. Intel is investigating several custom accelerators and has for that purpose acquired a number of startups, namely Nervana, Habana, and Movidius. Fathom (Armasu, 2016) is Movidius' ultra-low-power Neural Compute Stick (NCS), which operates at about 1 W. Also, ARM offers specialized CNN processors in the form of its Ethos family, with performance of up to 4 TOPs and support for INT8 and INT16 datatypes.


Figure 8. DPU architectures: Matrix of Processing Engines (MPE) on the left, and spatial architecture on the right.

As mentioned above, DPUs provide specialized datatypes to execute heavily quantized, reduced-precision CNN implementations. At the extreme, binarized neural networks (which offer very high throughput at extremely low power) are exploited in the following ASICs: BinarEye (Moons et al., 2018), BNN Custom Fabric (Ando et al., 2017), and the IBM AI Accelerator (IBM, 2018). Also, Lattice has announced binarized neural network libraries targeting low-power FPGAs and achieving 1 TOPs/W (Lattice, 2018). Custom floating point representations are also considered; for example, Microsoft's Brainwave project (Chung et al., 2018) uses this approach with the aim of applying FPGAs to CNNs at datacenter scale. However, the hardened versions in ASICs typically only support INT8, as lower precisions could potentially limit their application scope. FPGA-based MPE implementations such as Xilinx's xDNN are less constrained and in principle can be customized as needed.

Similar to GPUs, though perhaps to a lesser degree, DPUs leverage input, IFM (input feature map), and OFM (output feature map) parallelism, which requires buffering of inputs and may likewise have adverse effects on latency. A particular challenge arises in the context of software environments, which differ across vendors and are less mature than what we have observed for CPUs and GPUs. Typically, they are limited to supporting the execution of very specific layer types (sometimes even restricted with regard to parameter ranges) and neural networks, although the range of supported layer types and neural network models is continuously expanding.

In summary, through their specialization, these implementations minimize hardware cost, maximize performance and optimize efficiency by exploiting specific precision arithmetic with a specialized instruction set and customized memory system. However, in order to gain a performance advantage, the algorithms need to be adapted to leverage these features.

Spatial DPUs: The second type of DPU leverages spatial acceleration and exploits layer and branch parallelism. Popular examples are hls4ml and FINN (Umuroglu et al., 2017; Blott et al., 2018). To that end, the hardware architecture is specialized even further to the specifics of a given deep learning topology, as visualized on the right side of Figure 8. The hardware architecture effectively mimics the given deep learning topology, and the inputs are streamed through the architecture. Every layer is instantiated with a dedicated compute datapath. Each layer has a dedicated weight buffer, and activation buffers between layers are FIFOs of minimal size; they buffer just enough data to feed the next set of convolutions in the next layer. This is substantially more efficient compared to the first type of DPU or to GPUs and yields reduced latency.
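As a sketch of how such a spatial dataflow design can be generated from a trained Keras model (here assumed to exist as `model`), the snippet below follows the hls4ml documentation at the time of writing; the exact function signatures, the output directory, and the FPGA part number are illustrative and may differ between hls4ml versions.

```python
# Sketch: converting a trained Keras model into a spatial FPGA dataflow design
# with hls4ml. Names follow the hls4ml documentation and may differ by version;
# `model` is assumed to be a previously trained Keras model.
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity="name")
# Per-layer precision and reuse factor can be tuned in `config` here, e.g.
# setting a fixed-point type for a particular named layer (illustrative only).

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_prj",        # generated HLS project (illustrative path)
    part="xcvu9p-flga2104-2-e",     # target FPGA part (illustrative)
)
hls_model.compile()                  # C simulation of the generated design
# hls_model.build()                  # run HLS synthesis (requires vendor tools)
```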

MPE-style DPUs and GPUs generally perform layer-by-layer computation, where a sequence of images has to be buffered in order to extract the maximum compute from the platform (input, IFM, and OFM parallelism). To do this, the device buffers a batch of images before computing the first layer of all images; then all intermediate results are buffered, the next layer is computed, and so on. Hence the latency is heavily dependent on the size of the input batch.

As a result, spatial DPUs have an advantage with regard to latency. This level of customization is only possible with programmable hardware architectures such as FPGAs, as they can adapt the hardware architecture for different use cases. It would generally not make sense in the context of an ASIC accelerator, as that would yield an ASIC capable of accelerating only one specific topology, which would be far too restrictive in scope. The limitation of spatial architectures is scalability in the number of layers: each layer comes with a resource cost overhead, and there is a maximum number of layers that can be created within a single device. As a result, some extremely deep CNNs might not fit into a single device. Microsoft's Brainwave project leverages spatial computing and overcomes this limitation with a distributed approach (Chung et al., 2018).

Once a spatial DPU has been adopted and the architecture is specialized for a very specific CNN, the architecture can be further customized with regard to minimum precision. By supporting only the bits needed for each layer of the CNN, spatial DPUs can achieve even higher performance and efficiency, whereas in an MPE the hardware must support the maximum precision required over the whole network. With regard to customized precisions and spatial architectures, FINN pioneered the first binarized neural network accelerators (Fraser et al., 2017; Umuroglu et al., 2017) and provided many proof points for customized reduced-precision implementations (Blott et al., 2018). This flexibility comes at a cost in the form of programming complexity, and spatial architectures are extremely difficult to characterize in general, as their performance characteristics depend on the specifics of the hardware architecture that has been implemented.

Further Variants of DPUs: Beyond the previously discussed spatial DPUs and MPEs, there are many more variants. Some exploit sparse computing engines, for example EIE and its successor ESE (Han et al., 2017), SCNN (Parashar and Rhu, 2017), Cnvlutin (Albericio et al., 2016), and Cambricon-S and Cambricon-X (Zhang et al., 2016). These are the only architectures that can benefit from irregular sparsity. Finally, another dimension for the customization of precision is to optimize over the execution or run time of a CNN. In other words, beyond using statically fixed reduced precision, where the hardware operates with a fixed precision for all variables, some approaches explore run-time configurable bit precision, which allows for the exploitation of bit-level parallelism in the arithmetic. On the hardware implementation side, this can be exploited with run-time programmable precision and is effective with bit-serial implementations. For example, Umuroglu et al. (2018) demonstrate with BISMO that bit-serial approaches can provide highly attractive performance with minimal overhead on FPGAs, while Judd et al. (2016) show the same is true for ASICs with their prototype ASIC called Stripes. While this concept can be applied to both MPE and spatial architectures, it makes the most sense for MPEs.

Summary of Conventional CMOS Hardware Architectures: We analyzed three categories of hardware architectures that are leveraged for CNN inference, namely common CPUs, SIMD-based vector processors such as GPUs, and DPUs, which are specialized architectures for the acceleration of deep learning workloads. An overview of the architectures is given in Table 4. Please note that "Ease of Use" includes compute kernel programmability as well as general ease of use, and the degree of specialization includes operators, precision support, and customization toward topologies. In summary, for DPUs, we distinguish between tensor processors, which leverage a matrix of processing engines, and spatial architectures, which can be further specialized for specific topologies using FPGAs. CPUs are the most general solution but are high in power. GPUs and DPUs offer the highest performance, though GPUs are more expensive in energy cost. Spatial DPU architectures excel at low latency and provide the highest compute efficiency through maximized customization. CPUs, GPUs, and MPE-style DPUs use a sequential layer-by-layer compute model, whereas spatial DPUs execute all layers of the network concurrently. Hardened topologies in the form of ASICs, CPUs, and GPUs offer a fixed set of native datatypes, whereas FPGAs can adopt any precision and numerical representation; this provides the utmost flexibility and leverages quantization-based optimization to the maximum, whereas hardened approaches must default to the next-higher supported precision in which the reduced-precision variable can be embedded. However, the programmability of the FPGA fabric also comes at a speed and energy cost. All architectures can benefit from coarse-grained pruning optimization techniques; only sparse execution engines can benefit from irregular pruning, such as synaptic pruning. We also discussed the various deployment options. Many devices offer different power and operating modes as different compromises between throughput and power consumption, to adapt to the potentially very different optimization targets of different application settings. Similarly, batch sizes, thread counts, and stream sizes offer another compromise with regard to throughput vs. latency, again to facilitate a spectrum of different use cases. Finally, the table shows that speculative approaches such as Cerebras can bring fundamental performance scalability. Overall, each approach comes with its own advantages and disadvantages, and the best solution greatly depends on the specifics of a given use case.

Table 4. Characterization of types of hardware based on important metrics.

4.4. Hardware/Software Codesign Example: FPGA-Based Systems

In the last decade, we have observed the rise of two significant paradigms in scientific applications: heterogeneous-computing systems and machine learning. Heterogeneous computing can counter the slowdown of Moore's Law and the end of Dennard scaling and achieve the desired computational cost and performance by executing each portion of an application on the best-matched hardware, e.g., CPU, GPU, ASIC, or FPGA. Machine learning, on the other hand, is an automated process that creates programs able to solve classes of problems. As with traditional programming, machine learning can benefit significantly from heterogeneous computing; in addition, designers can tailor specialized but reprogrammable hardware to fit ever-changing machine learning requirements. This section examines tools and methodologies that can automatically deploy and orchestrate machine learning on FPGA systems within larger scientific applications. FPGAs are a particularly compelling example to explore because their efficiency coupled with their programmability makes for an interesting case study in hardware/software codesign.

Traditional software programming is complicated, and parallel high-performance programming is even more challenging. Programming heterogeneous systems that integrate FPGAs brings the challenge to the next level: the programmer must deal with a multi-objective optimization problem involving performance and costs, i.e., hardware resources. For machine learning applications, a common practice is to profile the application on a CPU (or GPU) to identify the bottlenecks to be offloaded onto the reprogrammable logic in order to improve the latency, throughput, or energy efficiency of the application as a whole. Part of the application can then remain on the CPUs to control the execution and interact with the rest of the scientific setup.

FPGA Programming: FPGAs are configurable integrated circuits that provide a good trade-off between performance, power consumption, and flexibility with respect to other hardware paradigms. However, programming FPGAs is a challenging and lengthy task that has traditionally been a job for hardware designers familiar with digital design and computer architecture. These requirements lead to a steep learning curve for software developers and other domain experts. To lower the entry barrier, there has been a growing focus on designing FPGA hardware at a higher level of abstraction. As a result, various approaches have brought FPGA development into the mainstream by allowing developers to design for FPGAs using familiar languages such as C, C++, OpenCL, and in some cases even C# (Singh and Greaves, 2008). An important question arises here: what are the additional advantages of designing hardware at a higher level of abstraction? High-level languages (HLLs) include various constructs and design patterns that are more functionally expressive. Furthermore, the amount of time spent verifying the design is a crucial factor. Hardware-description languages such as Verilog or VHDL focus on the final implementation details and are therefore more verbose, and larger code bases are harder to verify for functional correctness. HLLs, on the other hand, are more compact and simulate faster, so a designer can do more verification in the same span of time. Despite these advances, FPGA programming remains complex, which has compelled academia and industry to develop new compilers, frameworks, and libraries to facilitate hardware design.

High-Level Synthesis and Languages: High-level synthesis (HLS), also known as behavioral or algorithmic synthesis, is an automated design process that takes as input a functional description of a design and outputs an RTL implementation. It transforms an untimed (or partially timed) high-level specification into a fully timed implementation. The process of HLS starts by analyzing the data dependencies between the various operations in the functional description. This analysis leads to a Data Flow Graph (DFG) representation. After the DFG generation, during the allocation phase, HLS maps each operation onto a hardware resource with latency and area characteristics. Then, HLS adds the notion of time to the design during the scheduling phase. Scheduling takes the operations and resources of the DFG and decides in which clock cycle to execute them, given their latency information. This step infers sequential logic by adding registers between operations and creating finite state machines (Fingeroff, 2010).
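As a toy illustration of the scheduling step, the sketch below performs as-soon-as-possible (ASAP) scheduling of a small data-flow graph; the graph representation and operation names are invented for illustration and do not correspond to any particular HLS tool:

```python
def asap_schedule(deps, latency):
    """As-soon-as-possible scheduling of a data-flow graph.

    `deps` maps each operation to the operations it depends on;
    `latency` gives each operation's latency in clock cycles.
    Returns the earliest start cycle of every operation.
    """
    start = {}

    def visit(op):
        if op not in start:
            # an op can start once all of its predecessors have finished
            start[op] = max((visit(d) + latency[d] for d in deps[op]), default=0)
        return start[op]

    for op in deps:
        visit(op)
    return start

# y = (a * b) + (c * d), with 2-cycle multipliers and a 1-cycle adder
deps = {"mul1": [], "mul2": [], "add": ["mul1", "mul2"]}
latency = {"mul1": 2, "mul2": 2, "add": 1}
print(asap_schedule(deps, latency))   # {'mul1': 0, 'mul2': 0, 'add': 2}
```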

Over the past three decades, many HLS tools have been proposed. The work in Nane et al. (2016) presents an evaluation of different academic and commercial HLS tools tested on the same set of benchmarks. These tools have different input languages, perform different internal optimizations, and produce results of different quality, even for the same input language. The results show that each HLS tool can significantly improve performance once the designer has mastered benchmark-specific optimizations and constraints. However, academic HLS tools have a steeper learning curve because they place less focus on usability, while commercial HLS tools have an advantage because of their better documentation, robustness, and design-verification integration.

In terms of input languages for HLS, most HLLs are variants of the C language. However, there are a few limitations when generating hardware from a pure C specification. First, C lacks the notion of timing and concurrency: the designer must rely on the HLS tool to create clock-based timing and must either specify the concurrency model or rely on HLS to extract the parallelism among operations or processes. Second, C lacks bit-accurate data types; it only provides “native” data types such as char, int, and long, whose sizes are multiples of a byte. Third, it lacks the concepts of hardware interfaces and communication channels. SystemC was adopted as an HLS language to address these limitations (Ren, 2014), but it still has not entirely made inroads in the FPGA community. Another common problem with all C-based languages, including SystemC, is memory access and modeling: these languages have a flat memory model, and memory access is done through pointers, so either the HLS tool has to decide how to implement the memories in hardware, or the designer must leverage additional HLS directives or libraries to model the memory sub-system properly. Finally, in the family of C-based specification languages for HLS, the SYCL language is emerging. SYCL (pronounced "sickle") is an industry-driven standard that adds parallelism to C++ to design heterogeneous systems. SYCL programs perform best when paired with SYCL-aware C++ compilers such as the open-source data-parallel C++ (DPC++) compiler (Reinders et al., 2020).

Apart from the C variants, Bluespec is an open-source language for the description and synthesis of hardware based on SystemVerilog. It provides levels of abstraction with clean semantics that highlight aspects of the architecture, and it can be considered a high-level functional HDL in which modules are implemented as rules using SystemVerilog syntax. These rules, called guarded atomic actions, express behaviors as concurrently cooperating finite state machines (FSMs). Another language that has recently gained traction among FPGA designers is Chisel. It is based on Scala and supports hardware definition using highly parameterized generators as well as object-oriented and functional programming. Similar to an HLS flow, it compiles into an RTL Verilog implementation.

Although all these languages have helped create efficient hardware and significantly shorten development time, specific coding techniques are still necessary. The growth and diversification of application domains have also exposed the limitations of these programming languages, pushing the level of abstraction further up to domain-specific languages (DSLs). In recent years, a considerable corpus of DSLs and frameworks for FPGA design has emerged (Papadimitrioua et al., 2012; Kapre and Bayliss, 2016). In a DSL-based approach, the users and the tools can use domain knowledge to apply static and dynamic optimizations. However, a domain-specific HLS tool requires an appropriate compiler and a development environment that caters to the target domain. Table 5 shows some of the DSLs and frameworks developed over the years for FPGA computing, organized by application domain. Although the approaches in the table address diverse applications, an interesting question is what their common denominators are. To the best of our knowledge, most fall into one of two camps: either the DSL specification is compiled directly into the RTL implementation, or the approach leverages source-to-source compilation, in which the DSL compiler produces equivalent source code in a different programming language, for example C++, for a more standard HLS flow. The effort to design better HLS compilers and languages remains a significant part of present FPGA research, and the work in Table 5 is by no means an exhaustive list; the area of DSLs for FPGAs easily outnumbers the work presented in the table.

Table 5. A brief taxonomy of domain-specific languages and frameworks for FPGA applications.

Software and Hardware Integration: Running an application as software on a microprocessor is more accessible than designing and running specialized hardware, but it may result in poor performance and higher power costs. On the other hand, partitioning an application into software and hardware components is challenging. This process, also known as hardware/software codesign, divides an application between software running on the microprocessor and one or more custom hardware or co-processor components to achieve desired performance goals. Understandably, there exists a plethora of research in this area. The authors in Todman et al. (2005) provide background on notable aspects of older FPGA technologies and explain the fundamental architectures and design methods for codesign. Furthermore, the work in Choi et al. (2019) is another comprehensive study that evaluates and analyzes the microarchitectural characteristics of state-of-the-art CPU-FPGA platforms in depth, covering most of the shared-memory platforms with detailed benchmarks.

The two leading FPGA vendors, Xilinx and Intel, have their own solutions. The Xilinx Runtime Library (XRT) (Xilinx, 2021) is implemented as a combination of userspace and kernel driver components and supports both PCIe-based boards and MPSoC-based embedded platforms. Xilinx SDSoC (Xilinx, 2020b) and SDAccel (Xilinx, 2020a) became publicly available in late 2015; the former works only on select boards of the Zynq family of FPGAs, the latter only on selected PCIe-based boards for OpenCL computing. Since 2020, Xilinx has offered Vitis (Xilinx, 2020c) as a unified platform: the Vitis Unified Software Platform is a comprehensive development environment to build and seamlessly deploy accelerated applications on Xilinx platforms, including on-premises Alveo cards, FPGA instances in the cloud, and embedded platforms. In addition, the recent efforts of Xilinx under the Versal flagship (Xilinx, 2020d) are also a step toward codesigned applications. Intel provides the Open Programmable Acceleration Engine (OPAE) (Enno et al., 2020), the API library for programmers writing host applications that leverage FPGA acceleration. Likewise, Intel oneAPI (Intel, 2020) is an open, unified programming model built on standards to simplify the development and deployment of data-centric workloads across CPUs, GPUs, FPGAs, and other accelerators.

Apart from vendor solutions, academia and the open-source community have also attempted to simplify the integration of applications, operating systems, and hardware acceleration. For a comprehensive analysis, the reader is referred to Eckert et al. (2016) and King et al. (2015), which give a historical review and summary of the ideas and key concepts for including reconfigurable-computing aspects in operating systems, and present an overview of the published and available operating systems of the last 30 years targeting reconfigurable computing. Similarly, the design exploration and engineering of FPGA drivers that are portable across multiple physical interfaces (PCIe, Ethernet, optical links) remain a significant part of HW/SW codesign research. The challenges come from the variety of FPGA boards, the plethora of interfaces, and the diverse user requirements. Fundamentally, the FPGA drivers should allow the designer to load or reconfigure an application bitstream and support data transfers between the FPGA and the host.

A significant engineering challenge is how to partition driver functionality between the hardware and software components. One growing research focus is to exploit the spatial parallelism of FPGA technology by implementing multiple queues in FPGA drivers. A thorough analysis of system-level drivers for FPGAs is out of the scope of this report. Readers interested in FPGA system-level drivers are referred to the work in Vipin et al. (2013) and Jacobsen et al. (2015), whose authors benchmark various mainstream academic and vendor solutions for system-level drivers in the FPGA domain.

Despite various existing OS and driver solutions, standardization remains an open problem. Industry-wide standardization would allow for faster development and better portability and (re)usability of FPGA applications. There is already ongoing work in this area: efforts such as the CCIX consortium (CCIX, 2020) and the Heterogeneous System Architecture (HSA) Foundation (HSA, 2020) have made good progress.

The Case for ML Frameworks for FPGA Design: Machine learning is one of the fastest growing application domains, and over the years there has been increasing demand for FPGA-based implementations, as FPGAs can meet latency, throughput, and efficiency requirements through extreme customization of the hardware design, leveraging reduced-precision arithmetic, streaming dataflow implementations (introduced above as spatial architectures), and fine-grained sparsity. To make these customizations accessible to a broad spectrum of users and to reduce the significant engineering effort, compilers and tools are needed that cater to the needs of ML researchers and domain experts working with FPGAs. Two main ML frameworks aim to fill this vacuum: hls4ml and FINN. Building on the aforementioned tools, compilers, programming languages, and codesign solutions, both hls4ml and FINN have the potential to reach a broader scientific community. To get a better understanding of how such a tool flow works, we consider the FINN compiler in more detail in the following paragraphs.

The FINN compiler (Umuroglu et al., 2017) is an open-source framework to generate spatial DPU, or streaming dataflow, accelerators on FPGAs. The FINN compiler has a highly modular structure, as shown in Figure 9, which allows the user to interactively generate a specialized architecture for a specific DNN. The framework provides a frontend, transformation and analysis passes, and multiple backends to explore the design space in terms of resource and throughput constraints. Brevitas (Alessandro et al., 2020), a PyTorch library for quantization-aware training, is the frontend used in this flow. It enables training DNNs with weights and activations quantized down to a few bits and then exports the trained network into the intermediate representation (IR) used by the FINN compiler. The transformation and analysis passes help to generate an efficient representation of the DNN. Finally, the backend contains a code generator that creates synthesizable accelerator descriptions, which can be implemented as either a standalone Vivado IPI component or integrated into various shells, including Xilinx Alveo boards and PYNQ embedded platforms.
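The following sketch illustrates the core idea behind the kind of quantization-aware training that a frontend such as Brevitas provides, namely fake-quantizing weights in the forward pass with a straight-through estimator; it uses plain PyTorch with invented helper names and is not the Brevitas API or the FINN export path:

```python
import torch
import torch.nn as nn

def fake_quant(x, bits):
    """Uniformly quantize to `bits` in [-1, 1] with a straight-through estimator."""
    scale = 2 ** (bits - 1) - 1
    q = torch.round(torch.clamp(x, -1, 1) * scale) / scale
    # forward uses the quantized value, backward sees the identity
    return x + (q - x).detach()

class QuantLinearBlock(nn.Module):
    """Linear layer whose weights are quantized to a few bits during training."""
    def __init__(self, in_f, out_f, bits=4):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.bits = bits

    def forward(self, x):
        w_q = fake_quant(self.linear.weight, self.bits)
        return torch.relu(nn.functional.linear(x, w_q, self.linear.bias))

model = nn.Sequential(QuantLinearBlock(16, 8, bits=4), QuantLinearBlock(8, 2, bits=4))
out = model(torch.randn(1, 16))   # trains like any other PyTorch model
```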

Figure 9. FINN compiler flow.

For further processing, the DNN model must first be converted into the IR of the FINN compiler. The frontend stage takes care of this by converting the PyTorch description into the IR, called FINN-ONNX. This IR is based on ONNX (Bai et al., 2019), an open-source interchange format that uses a protobuf description to represent DNNs. It comes with several standard operators and allows users to easily create their own operators to customize the model. Nodes represent layers, and edges carry outputs from one layer to become inputs to another. The ability to customize the ONNX representation is used in the framework to add application-specific nodes and attributes. Each node is tagged with the quantization of its inputs, parameters (weights and activations), and outputs to enable quantization-aware optimizations and the mapping to backend primitives optimized for quantized computation. During the compiler flow, the nodes are transformed into backend-specific variants via a series of transformation passes.

The main principle of the FINN compiler is graph transformation and analysis passes, which change or analyze the IR of the model. A pass is a function that takes the IR graph as input and either (a) transforms the DNN by looking for a certain pattern, changing the graph in a specific manner, and outputting the modified graph, or (b) analyzes the DNN to produce metadata about its properties. To bring the model into a representation from which code can be produced and the hardware accelerator can finally be generated, various transformations must be applied. The main transformations involved are summarized below.
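Before walking through those transformations, the following toy sketch illustrates the pass interface itself, with one transformation pass and one analysis pass over an invented list-of-nodes IR (not the FINN-ONNX data structures):

```python
# Toy IR: a model is a list of nodes, each a dict with an op type and attributes.
model = [
    {"op": "MatMul"},
    {"op": "Identity"},          # a no-op left over from the frontend export
    {"op": "MultiThreshold"},
]

def remove_identity_nodes(graph):
    """Transformation pass: returns a modified copy of the graph."""
    return [node for node in graph if node["op"] != "Identity"]

def count_ops(graph):
    """Analysis pass: returns metadata about the graph without changing it."""
    hist = {}
    for node in graph:
        hist[node["op"]] = hist.get(node["op"], 0) + 1
    return hist

model = remove_identity_nodes(model)
print(count_ops(model))   # {'MatMul': 1, 'MultiThreshold': 1}
```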

Although the PyTorch description of the network is mostly quantized, it may still contain some floating-point operations, e.g., from preprocessing, channel-wise scaling, or batchnorm layers. In order to generate a hardware accelerator from the model, these floating-point operations must be absorbed into multi-level thresholds, so that a functionally identical network of integer operations is created. The transformation that achieves this is called streamlining, as described by Umuroglu and Jahre (2017). During streamlining, floating-point operations are moved next to each other, collapsed into single operations, and absorbed into succeeding multi-thresholding nodes.
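As a small numerical illustration of the streamlining idea (not FINN code), a positive scale and a bias in front of a thresholding node can be absorbed by rescaling the thresholds, leaving a purely integer computation:

```python
import numpy as np

def multithreshold(x, thresholds):
    """Count how many thresholds each input value meets or exceeds."""
    return (x[:, None] >= thresholds[None, :]).sum(axis=1)

# Original float pipeline: y = multithreshold(a * x + b, T), with a > 0
a, b = 0.25, -1.0
T = np.array([0.0, 1.0, 2.0])

# Streamlined: absorb scale/bias into the thresholds -> integer-only inference
T_absorbed = (T - b) / a          # a*x + b >= T  <=>  x >= (T - b)/a

x = np.arange(0, 16)              # quantized integer activations
assert np.array_equal(multithreshold(a * x + b, T),
                      multithreshold(x, T_absorbed))
```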

Next, high-level operations in the graph are lowered to simpler implementations from the FINN HLS-based hardware library. For instance, convolutions are lowered to a sliding-window node followed by a matrix-vector node, while pooling operations are implemented by a sliding window followed by an aggregation operator. The resulting graph then consists of layers that can be converted to hardware building-block equivalents. Each node corresponds to a Vivado HLS C++ function call, from which an IP block per layer can be generated using Vivado. The resources utilized by each hardware building block can be controlled through specific attributes passed from FINN to Vivado; for example, multiplications can be performed with LUTs or DSP blocks, and parameters can be stored in distributed RAM, Block RAM, or UltraRAM.
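The lowering of a convolution into a sliding-window step followed by a matrix-vector product can be illustrated with a small numpy sketch (an im2col-style toy example, not the code of the FINN hardware library):

```python
import numpy as np

def sliding_window(x, k):
    """Turn a 2D input into a matrix whose rows are flattened k x k patches,
    so the convolution becomes a matrix-vector product."""
    h, w = x.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(x[i:i + k, j:j + k].ravel())
    return np.array(rows)

x = np.arange(16.0).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0      # a 3x3 averaging filter

# Convolution expressed as (sliding window) followed by (matrix-vector product)
out = sliding_window(x, 3) @ kernel.ravel()
print(out.reshape(2, 2))            # [[5. 6.] [9. 10.]]
```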

Finally, the folding process assigns compute resources to each layer to obtain the desired throughput with a balanced pipeline by fine-tuning the layers' degrees of parallelism. To enable per-layer specialization without reconfiguration and to minimize latency, FINN creates dedicated per-layer hardware interconnected with FIFO channels, so the outermost loop across L layers is always fully pipelined. Once the folding is specified, resource estimates can be produced for each node. There are several ways to estimate the resources: even before IP blocks are generated from the HLS layers, an estimate of the resources per layer can be made using analytical models based on the concepts from the FINN-R paper (Blott et al., 2018). Estimates can also be extracted from Vivado HLS after IP generation, though these are still estimates that may differ from the resource usage of the final implementation due to synthesis optimizations.
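A back-of-the-envelope sketch of the folding idea (illustrative numbers, not FINN's actual performance model): if a layer with a given number of multiply-accumulate operations is assigned P parallel multipliers, it needs roughly MACs/P cycles per input, so a balanced pipeline gives each layer just enough parallelism to hit a common cycle budget:

```python
import math

layer_macs = [1_000_000, 250_000, 50_000]   # MACs per inference for each layer
target_cycles = 10_000                      # desired cycles per inference

for macs in layer_macs:
    parallel = math.ceil(macs / target_cycles)   # multipliers assigned to this layer
    cycles = math.ceil(macs / parallel)          # resulting cycles per inference
    print(f"{parallel:4d} multipliers -> {cycles:6d} cycles")
# The slowest layer limits throughput, so folding aims to equalize these cycle counts.
```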

The backend is responsible for consuming the IR graph and backend-specific information to create a deployment package, and it is also implemented using the transformation concept. To build the inference accelerator, FIFOs are inserted between the layers, which can be sized automatically by the FINN compiler. Afterwards, the individual IP blocks are stitched together and synthesized. The stitched IP can be manually integrated into a system or inserted into an appropriate shell for the target platform. If the target platform is an Alveo card, the design is exported as a Vivado Design Checkpoint (DCP), followed by the generation and linking of Xilinx Vitis (Kathail, 2020) object files.

Summary of Hardware/Software Codesign and FPGA-Based Systems: In summary, CPUs are the most general solution for CNN inference but consume the most power. GPUs and DPUs offer the highest performance, though GPUs are more expensive with regard to energy cost. FPGAs offer several tradeoffs that may fit rapidly moving application domains well. They can adopt any precision and numerical representation, which provides the utmost flexibility and exploits quantization to the maximum, whereas hardened approaches must default to the next higher supported precision in which the reduced-precision variable can be embedded. Furthermore, through the spatial dataflow approach, much lower latency can be achieved. However, the complexity of programming FPGAs limits their deployment. Tools such as hls4ml and FINN are frameworks created specifically for the ML domain; they automate the process of hardware generation for the end user, hiding the associated design complexity of FPGAs and enabling them for the previously discussed end applications.

4.5. Beyond-CMOS Neuromorphic Hardware

With rapidly growing machine learning applications comes an acute need for their efficient hardware implementations. Most efforts are focused on digital CMOS technology, such as implementations based on general-purpose TPUs/GPUs, FPGAs, and more specialized ML hardware accelerators. The steady improvements in the performance and energy efficiency of such hardware platforms over the past decade are attributed to the use of very advanced, sub-10-nm CMOS processes and to the holistic optimization of circuits, architectures, and algorithms. This includes, for example, taking advantage of aggressive voltage-supply scaling (Moons et al., 2017), very deep pipelines and extensive data reuse in architectures (Chen et al., 2017), and lowered precision of the weights and activations of the algorithms (Simons and Lee, 2019). As a result, very compact state-of-the-art neural networks, such as MobileNet with 3.4M parameters and 300M multiply-and-add operations per inference (Sandler et al., 2018), can now fit entirely on a single chip. However, advances on all these fronts are saturating and can no longer rely on the faltering Moore's law.

On the other hand, further progress is essential because ML algorithms are becoming increasingly complex. For example, transformer networks (Vaswani et al., 2017), the state-of-the-art approach for many ML tasks today (Vaswani et al., 2017; Vinyals et al., 2019; Dosovitskiy et al., 2020), can have hundreds of billions of parameters and perform hundreds of trillions of operations per inference. Moreover, a transformer's functional performance typically improves with model size (Brown et al., 2020; Rajbhandari et al., 2020). Training such models requires enormous, data-center-scale (e.g., kiloTPU-year) resources, while performing inference on resource-constrained edge devices would be extremely challenging.

The opportunities for building more efficient hardware may come from biological neural networks. Indeed, the human brain, with >1,000× more synapses than there are weights in the largest transformer networks, is believed to be extremely energy efficient (Hasler and Marr, 2013), which serves as a general motivation for developing neuromorphic hardware (Mead, 1990). There is a long history of CMOS neuromorphic circuits (Indiveri et al., 2011). However, unleashing the full potential of neuromorphic computing might require novel, beyond-CMOS device and circuit technologies (Berggren et al., 2020) that allow for more efficient implementations of the various functionalities of biological neural systems.

In this section, the most prominent emerging technology proposals, including those based on emerging dense analog memory device circuits, are grouped according to the targeted low-level neuromorphic functionality - see, e.g., reviews in Burr et al. (2017), Bavandpour et al. (2018), Yang et al. (2013), and Yu (2018) and original work utilizing volatile (Ohno et al., 2011; Pickett et al., 2013; Chu et al., 2014; Sheridan et al., 2017; Wang et al., 2017, 2018c; Adda et al., 2018; Lashkare et al., 2018; Zhang et al., 2018b; Cai et al., 2019a; Yeon et al., 2020) and nonvolatile (Mahmoodi et al., 2009, 2019; Alibart et al., 2012; Govoreanu et al., 2013; Prezioso et al., 2015, 2016, 2018; Li et al., 2016a; Adam et al., 2017; Pedretti et al., 2017; Bayat et al., 2018; Hu et al., 2018b; Wang et al., 2018c; Kim et al., 2019; Cai et al., 2020a; Lin et al., 2020b; Liu et al., 2020b; Yao et al., 2020a) memristors, phase change memories (PCM) (Kuzum et al., 2011; Burr et al., 2015; Tuma et al., 2016; Ambrogio et al., 2018; Ríos et al., 2019; Joshi et al., 2020; Karunaratne et al., 2020), and nonvolatile NOR (Bayat et al., 2015; Guo et al., 2017a,b; Mahmoodi et al., 2019), and NAND (Bavandpour et al., 2019, 2020; Lee et al., 2019), and organic volatile (Fuller et al., 2019) floating gate memories, as well as multiferroic and spintronic (Sengupta et al., 2016; Ni et al., 2018; Ostwal et al., 2018; Romera et al., 2018; Grollier et al., 2020), photonic (Bruiner et al., 2013; Vandoorne et al., 2014; Tait et al., 2016; Buckley et al., 2017; Shen et al., 2017; Feldmann et al., 2019; Hamerly et al., 2019; Hamley et al., 2019; Lin et al., 2019; Ríos et al., 2019; Goi et al., 2020; Shasti et al., 2021), and superconductor (Buckley et al., 2017; Segall et al., 2017; Rowlands et al., 2021) circuits. More discussion is devoted to analog vector-by-matrix multiplication circuits in the following subsection because of their immediate value for today's state-of-the-art algorithms. More biologically-realistic proposals described in the subsequent sections are less emphasized because they target algorithms with inferior performance. The least mature though very intriguing quantum neuromorphic computing (Markovich and Grolier, 2020; Yamamoto et al., 2020) is not discussed in this brief review.

Analog Vector-By-Matrix Multiplication: The emergence of dense analog-grade nonvolatile memories in the past two decades renewed interest in analog-circuit implementations of vector-by-matrix multiplication (VMM) (Widrow and Angel, 1962; Mead, 1990; Holmes et al., 1993; Chawla et al., 2004; Alibart et al., 2012; Bayat et al., 2015; Guo et al., 2017b), which is the most common and most frequently performed operation of any neural network in training or inference (Hertz et al., 1991; Gerstner and Kistler, 2002). In the simplest case, such a circuit comprises a matrix of memory cells that serve as configurable resistors encoding the matrix (synaptic) weights, and peripheral sense amplifiers playing the role of neurons (Figure 10). The input vector is encoded as voltages applied to the rows of the memory matrix, so that the currents flowing into the virtually grounded columns correspond to the VMM results. Because addition and multiplication are performed at the physical level, via Kirchhoff's and Ohm's laws, respectively, such an approach can be extremely fast and energy-efficient, provided that the memory devices are dense and their conductances are adjustable (i.e., multi-state). The energy efficiency comes in part from performing “in-memory” computing, which reduces the amount of data (corresponding to the synaptic weights) that is moved across or in and out of the chip during computation; such communication overhead can dominate the energy consumption of the most advanced digital CMOS implementations.
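The physics of the crossbar reduces to a matrix-vector product: with input voltages V on the rows and conductances G at the crosspoints, the current collected on each virtually grounded column is the sum over rows of G·V. A minimal numerical sketch with illustrative values:

```python
import numpy as np

# Conductance matrix (synaptic weights) and input voltages (activations)
G = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 0.4]])      # rows = inputs, columns = outputs (siemens)
V = np.array([0.3, 0.1, 0.2])   # row voltages (volts)

# Ohm's law gives the per-device current G_ij * V_i; Kirchhoff's current law
# sums the currents flowing into each virtually grounded column.
I = V @ G                       # column currents = the VMM result (amperes)
print(I)                        # [0.37 0.22]
```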

Figure 10. Analog vector-by-matrix multiplication (VMM) in a crossbar circuit with adjustable crosspoint devices. For clarity, the output signal is shown for just one column of the array, while the sense amplifier circuitry is not shown. Note that other VMM designs, e.g., those utilizing the duration of applied voltage pulses rather than their amplitudes to encode inputs/outputs, are now being actively explored; see, e.g., their brief review in Bavandpour et al. (2018).

The general challenge for the practical adoption of such circuits, especially when using the most promising emerging memory technologies, is variation in the I-V characteristics, e.g., in the switching voltages applied to change the memory state. In light of this challenge, the most straightforward application is ex-situ trained inference accelerators for the earlier firing-rate neural networks (Bavandpour et al., 2018), i.e., the so-called second generation of artificial neural networks (ANNs) with graded-response neurons. In such applications, memory devices are updated infrequently, only when new inference functionality must be programmed. Thus, the crosspoint devices' conductances can be tuned with slower write schemes that are more tolerant to device variations. For example, after the weights have been found in software, memory cells are programmed, one by one, using feedback write-verify algorithms that can adapt to the unique I-V characteristics of each device (Alibart et al., 2012). For the same reason, the switching endurance, i.e., the number of times the memory devices can be reliably programmed, and the write speed/energy are less critical. Additionally, the VMM operations in the inference of many neural networks can be performed with moderate, less than 8-bit, precision without incurring accuracy loss (Yang and Sze, 2019), which further relaxes the requirements on analog properties and permits more I-V non-idealities and noise.
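The write-verify idea can be sketched as a simple feedback loop; the device model below is hypothetical, and real schemes additionally adapt the pulse amplitude and polarity to the measured I-V characteristics:

```python
import random

def program_cell(target_g, read, pulse, tolerance=0.01, max_pulses=100):
    """Tune one memory cell's conductance to `target_g` by write-verify.

    `read()` returns the current conductance, `pulse(sign)` nudges it up or
    down by a device-dependent (and noisy) amount.
    """
    for _ in range(max_pulses):
        g = read()
        if abs(g - target_g) <= tolerance:
            return g                        # converged within tolerance
        pulse(+1 if g < target_g else -1)   # potentiate or depress
    return read()

# Toy device: each pulse changes the conductance by a random step
state = {"g": 0.2}
read = lambda: state["g"]
def pulse(sign):
    state["g"] += sign * random.uniform(0.005, 0.02)

print(program_cell(target_g=0.5, read=read, pulse=pulse))
```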

The most advanced neuromorphic inference circuits have been demonstrated with more mature floating-gate transistor memory circuits. Until recently, such circuits were implemented primarily with “synaptic transistors” (Diorio et al., 1996), which may be fabricated using standard CMOS technology, and several sophisticated, efficient systems were demonstrated (Chawla et al., 2004; Hasler and Marr, 2013; George et al., 2016). However, these devices have relatively large areas (>10³ F², where F is the minimum feature size), leading to higher interconnect capacitance and hence larger time delays. More recent work focused on implementing mixed-signal networks with much denser (~40 F²) commercial NOR-flash memory arrays redesigned for analog computing applications (Bayat et al., 2015; Guo et al., 2017b). For example, a prototype of a 100k+-cell two-layer perceptron network fabricated in a 180-nm process with modified NOR-flash memory technology was reported in Guo et al. (2017a). It performed reliably, with negligible long-term drift and temperature sensitivity, and reproducible classification of the MNIST benchmark images with ~95% fidelity, sub-1-μs time delay, and sub-20-nJ energy consumption per pattern. The energy-delay product was six orders of magnitude better than that of the best (at that time) 28-nm digital implementation performing the same task with similar fidelity (Guo et al., 2017a).

Recent theoretical studies have shown that neuromorphic inference circuits could also be implemented with much denser 3D-NAND flash memories (Bavandpour et al., 2019, 2020; Lee et al., 2019), projected to eventually scale to a density of 10 terabits per square inch. In the long term, the most promising are perhaps circuits based on metal-oxide resistive switching random access memory (ReRAM for short, also called metal-oxide memristors) (Yang et al., 2013; Yu, 2018), especially their passively integrated (0T1R) technology variety (Kim et al., 2019). Indeed, due to the ionic switching mechanism, ReRAM devices with dimensions below 10 nm still retain excellent analog properties and year-scale retention (Govoreanu et al., 2013). Furthermore, a low-temperature fabrication budget allows monolithic vertical integration of multiple ReRAM crossbar circuits, further increasing the effective density (Adam et al., 2017). There has been rapid progress in scaling up the complexity of ReRAM-based neuromorphic circuit demonstrations over the past several years (Prezioso et al., 2015; Bayat et al., 2018; Hu et al., 2018b; Kim et al., 2019; Lin et al., 2020b; Liu et al., 2020b; Yao et al., 2020a). However, ReRAM technology is still in much need of improvement. In addition to high device variations, another remaining issue is the high write currents and operating conductances, which must be decreased by at least an order of magnitude to reduce the significant overhead of the peripheral circuits (Kim et al., 2019).

The device requirements for training hardware accelerators are different and much more stringent. For instance, long retention is not required because the weights are updated frequently. That allows the use of volatile memories in analog VMM circuits, such as interfacial memristors based on electron trapping/detrapping switching (Chu et al., 2014; Sheridan et al., 2017; Cai et al., 2019a) and solid-state-electrolyte memories (Fuller et al., 2019; Berggren et al., 2020; Yeon et al., 2020), or even capacitor-based memories controlling current via crosspoint transistors (Ambrogio et al., 2018). However, the toughest challenge is the much higher computing and weight precision required for training, together with the need for efficient weight-update schemes, which in turn necessitates drastically tighter device variations. A related requirement is that the change in device conductance upon applying a write pulse should not depend on the device's current state (the so-called linearity-of-update property); otherwise, accurate conductance adjustment would require sending a unique write pulse based on the current device state, which would hardly be compatible with fast (parallel) weight updates.

Phase-change memories have also been investigated as candidates for the variable resistors in analog VMM circuits (Burr et al., 2015; Joshi et al., 2020), though their main drawback is significant drift of the conductive state over time. High write endurance, high density (with a vertical, 3D-NAND-like integrated structure), and long retention have been demonstrated in 1T ferroelectric RAM devices, and there is much excitement about such devices' applications in training and inference accelerators (Ni et al., 2018), though their analog properties are probably inferior to those of ReRAM. The significant drawbacks of magnetic devices, such as magnetic tunnel junction memories, are smaller on/off current ratios, insufficient for practical VMM circuits, and poor analog properties for scaled-down devices (Grollier et al., 2020).

The potential of using light to implement fast, large-fanout interconnects and linear computations, such as the multiply-and-add operation, has motivated photonic neuromorphic computing research (Hamley et al., 2019; Berggren et al., 2020; Goi et al., 2020; Shasti et al., 2021). Different implementation flavors, e.g., with fixed (Lin et al., 2019) and programmable (Tait et al., 2016; Shen et al., 2017; Hamerly et al., 2019; Ríos et al., 2019) functionalities, have recently been suggested in the context of modern neural networks. Specifically, Lin et al. (2019) report a system of multiple 3D-printed optical layers, each a mesh of regions (neurons) with specifically chosen transmission-reflection properties, which can perform pattern-classification inference similar to convolutional neural networks. By sending coherent light with amplitude-encoded input, a useful computation is performed at the speed of light: the light diffracts and interferes when passing through the optical system and is ultimately steered to the specific region of the output layer corresponding to the pattern class. Ríos et al. (2019), Hamerly et al. (2019), Shen et al. (2017), and Tait et al. (2016) report optical neuromorphic systems with configurable weights. In Ríos et al. (2019), the inputs are encoded in the light's energy and the weights are encoded by optical attenuation in PCM devices, so that a product is computed by passing the light through a PCM device. Tait et al. (2016) propose encoding inputs with light amplitude, using a specific frequency for each VMM input. The light from the inputs is combined and passed to frequency-selective weight banks based on microring resonators (MRRs) featuring metal heaters to perform the multiplication; in particular, the MRR coupling (i.e., the weight) is controlled via heating by adjusting the current supplied to each MRR. In these reconfigurable implementations, the product accumulation (i.e., the summation in the VMM) is performed by integrating the light-induced charges on a photodetector. A very aggressive time-division multiplexing scheme for calculating VMMs, in which both weights and inputs are encoded in the coherent light's amplitude, is proposed in Hamerly et al. (2019): at one step of the scheme, the input light is fanned out into n channels, combined with the n light-encoded weights using a beam splitter, and then sent to n homodyne photodetectors to compute n products in parallel. All-optical feed-forward inference based on Mach-Zehnder interferometer meshes utilizes the singular value decomposition of the weight matrix (Shen et al., 2017): the unitary matrix transformations are implemented with optical beam splitters and phase shifters, while the diagonal matrix is implemented with optical attenuators.

In principle, sub-aJ energy and sub-ps latency for a single multiply-and-add operation might be possible with optical computing (Hamley et al., 2019). However, the main challenges remain the much larger dimensions of the optical components and the very high I/O overhead of converting to and from the optical domain (Hamley et al., 2019; Berggren et al., 2020; Shasti et al., 2021). Designs that rely on conversion to the electrical domain would be especially affected by the poor integration density of optical devices, owing to larger electrical communication overheads, which were shown to overwhelm the system-level performance of (much denser) ReRAM-based circuits (Bavandpour et al., 2018). Optical systems would ultimately benefit from very wide (≫10,000) dot products and/or deep time-division multiplexing to amortize the I/O overhead. However, the possible issues of nonlinearities in the charge integration and the utility of such wide dot-product computations remain unclear (Hamley et al., 2019).

Stochastic Vector-by-Matrix Multiplication: Computations performed by the brain are inherently stochastic in that, e.g., substantially different neural responses are observed upon repeated presentation of identical stimuli (Rolls and Deco, 2010). Such noisy operation is mimicked by probabilistic neural networks, such as Boltzmann machines (Hinton and Sejnowski, 1983) and deep belief neural networks (Hinton, 2009). In the simplest case, such a network is composed of binary neurons that compute stochastic dot products, i.e., probabilistically generate output according to their pre-activation (dot-product) values.

The stochastic functionality can be realized on either the synapse or the neuron side. In the latter, more straightforward scenario, the neuron first computes the dot product of its inputs and the corresponding weights deterministically. The result is then passed to a “probabilistic” activation function, e.g., used as the argument of a sigmoid probability function, to determine the probability of generating a high output. Because of the typically large (>100) ratio of synapses to neurons, efficient deterministic dot-product implementations, e.g., with the already discussed analog VMM circuits, are of primary importance for realizing high-performance probabilistic neural network hardware. Still, earlier work showed that even the simplest, deterministic neurons may incur substantial overhead, e.g., occupy up to 30% of the area and consume up to 40% of the energy for some neural network models (Bavandpour et al., 2018). Hence, neuromorphic hardware would also benefit from an efficient realization of stochastic neurons.
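A minimal sketch of such a stochastic binary neuron (illustrative numbers): a deterministic dot product followed by a Bernoulli draw whose probability is given by a sigmoid, with the temperature setting the slope that annealing schemes exploit:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(x, w, temperature=1.0):
    """Fire with probability sigmoid(pre-activation / temperature)."""
    pre_activation = x @ w                        # deterministic dot product
    p_fire = 1.0 / (1.0 + np.exp(-pre_activation / temperature))
    return int(rng.random() < p_fire)             # probabilistic binary output

x = np.array([+1, -1, +1, +1])
w = np.array([0.4, 0.3, -0.1, 0.5])
# Lower temperature -> closer to a deterministic threshold; higher -> noisier.
samples = [stochastic_neuron(x, w, temperature=0.5) for _ in range(1000)]
print(sum(samples) / 1000)   # empirical firing probability, roughly sigmoid(1.0)
```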

Emerging devices can be broadly employed in two ways to achieve stochastic functionality, namely by using either the dynamic or the static I-V characteristics of memory devices. The former approach utilizes the intrinsically stochastic switching between memory states in emerging memory devices. For example, in MTJ memories, thermal fluctuations cause stochastic transitions between the low-resistance parallel and high-resistance antiparallel states, so the probability of the final memory state upon switching can be controlled by the spin-torque current (Grollier et al., 2020). The melt-quench-induced reconfiguration of the atomic structure is intrinsically stochastic in phase-change memories (PCMs) (Tuma et al., 2016). These phenomena have been suggested for implementing MTJ (Ostwal et al., 2018) and PCM (Tuma et al., 2016) stochastic neurons. The second approach utilizes intrinsic and extrinsic current fluctuations in memory devices, e.g., random telegraph (Cai et al., 2020a) and thermal noise (Mahmoodi et al., 2009) in ReRAM devices, or shot noise in nanoscale floating-gate transistors (Mahmoodi et al., 2009, 2019). In this approach, the noisy current flowing into the neuron is compared against a reference value, e.g., using a simple latch, to implement a probabilistic activation function (Mahmoodi et al., 2019).

The primary concern with the former approach is the limited endurance of many memories and the drift of the stochastic switching properties upon repeated switching. An additional drawback is the necessity of co-integrating multiple memory device technologies for scalable stochastic dot-product circuits, e.g., integrating ReRAM-based artificial synapses with MTJ-based neurons. On the other hand, analog circuits based on ReRAM devices only (Figure 10), though operating at a much lower signal-to-noise ratio (SNR), can be utilized to implement the stochastic VMM of the second approach. Furthermore, adjusting the read voltages in such a circuit allows the SNR, and hence the effective temperature, i.e., the slope of the sigmoid probability function, to be controlled, enabling an efficient implementation of stochastic annealing in Boltzmann machines at runtime. A possible downside of the second approach is slower operation because of the lower read currents (which can potentially be addressed by utilizing external noise instead; Mahmoodi et al., 2019). Finally, the impact of noise quality on functional performance is another common concern. This issue has not been studied systematically yet, though Gaussian-like thermal or shot noise should be more advantageous for truly random operation.

Spiking Neuron and Synaptic Plasticity: Despite much recent progress in algorithms (Neftci et al., 2019; Tavanaei et al., 2019), the most biologically plausible spiking neural networks (SNNs) (Gerstner and Kistler, 2002) are still inferior in functional performance to simpler ANNs. Even if simpler ANNs remain superior, work on efficient SNN hardware could still be justified by the need to efficiently interface with and/or model the brain, which in turn could lead to the development of higher-cognition artificial intelligence algorithms. An additional intriguing feature of SNNs is their local weight-update rules, which require only information from the pre- and post-synaptic neurons and could enable large-scale neuromorphic hardware with real-time training capabilities (Thakur et al., 2018).

In the simplest SNN models, information is encoded in spike-time correlations (Gerstner and Kistler, 2002), while the network function is defined by the synaptic weights, which are adjusted based on the relative timing of the spikes passed via the synapses. In addition to VMM, the essential operations in SNNs are the leaky-integrate-and-fire (LIF) function performed by neurons and various types of synaptic plasticity, such as short-term plasticity (STP), long-term potentiation (LTP), and spike-timing-dependent plasticity (STDP) (Gerstner and Kistler, 2002). LIF neurons mimic the dynamic processes in the neuronal membrane, while synaptic plasticity mimics learning and memory mechanisms in biological networks. For example, STP is a temporary change in synaptic strength implementing a short-term memory: without immediate reinforcement of the synaptic weight adjustment, the memory would be lost, i.e., the synaptic weight would relax to its original equilibrium state. On the other hand, a frequently repeated spiking stimulus causes long-term memory, e.g., permanent potentiation via the LTP mechanism. STDP is a time-dependent specialization of Hebbian learning; its specific goal is to strengthen the synaptic efficiency when pre- and post-synaptic spikes happen in the expected causal temporal order and to weaken it otherwise.
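A discrete-time sketch of an LIF neuron and a pair-based STDP weight update (illustrative constants, not tied to any particular device implementation discussed below):

```python
import numpy as np

def lif_step(v, i_in, leak=0.9, v_thresh=1.0):
    """One time step of a leaky-integrate-and-fire neuron.

    The membrane potential leaks toward zero, integrates the input current,
    and emits a spike (and resets) when it crosses the threshold.
    """
    v = leak * v + i_in
    spike = v >= v_thresh
    return (0.0 if spike else v), spike

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.06, tau=20.0):
    """Pair-based STDP: potentiate causal (pre-before-post) spike pairs,
    depress anti-causal ones, with an exponentially decaying window."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

v = 0.0
for t, i_in in enumerate([0.3, 0.4, 0.5, 0.2]):
    v, spike = lif_step(v, i_in)
    print(t, round(v, 3), spike)          # spikes on the third input

print(stdp_dw(t_pre=10.0, t_post=15.0))   # > 0: strengthen the synapse
print(stdp_dw(t_pre=15.0, t_post=10.0))   # < 0: weaken the synapse
```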

A compact implementation of LIF neurons with biological, ms-scale integration times using conventional circuit technology is challenging because of the large capacitors required. Leaky integration circuits utilizing volatile memristors (e.g., based on filamentary (Zhang et al., 2018b), interfacial (Lashkare et al., 2018), and Mott-insulator (Adda et al., 2018) switching mechanisms) have been suggested to address this problem. In such implementations, the integrated current is encoded in the conductive state of the volatile memory device. Neuron spiking functionality has been demonstrated with threshold-switching (volatile) memory devices that feature S-type negative differential resistance (NDR) I-V characteristics (Pickett et al., 2013); the general idea is similar to oscillator circuits based on an S-type NDR device connected to a resistor-capacitor circuit (Kesim, 2019). LIF neurons based on spin-torque magnetic memories were simulated in Sengupta et al. (2016); in such a neuron, spin-torque oscillations are employed to generate spikes, while incremental magnetization and its relaxation mimic integration and leakage, respectively.

The STP-to-LTP transition has been emulated with solid-state-electrolyte devices; see, e.g., the original work in Ohno et al. (2011) and more recent work on “diffusive” memristors (Wang et al., 2017). Specifically, short and infrequent write pulses result in the formation of thin filaments, which are unstable and quickly dissolve, representing short-term memory. However, a thicker and more stable filament can be formed by applying repeated and/or longer write pulses, thus mimicking the transition to LTP. Different STDP window implementations, e.g., using PCM (Kuzum et al., 2011) or metal-oxide ReRAM (Prezioso et al., 2016) devices, have been suggested by carefully selecting the shape of the pre- and post-synaptic write voltage pulses; see a comprehensive review of emulated synaptic plasticity with memristive devices in Serrano-Gotarredona et al. (2013) and Saighi et al. (2015).

Several small-scale spiking neuromorphic systems based on emerging device technologies have been demonstrated, including coincidence detection via the STDP mechanism based on metal-oxide memristors (Pedretti et al., 2017; Prezioso et al., 2018) and temporal data classification with diffusive memristors (Wang et al., 2018c). However, the overall progress in such advanced hardware has been much slower compared to simpler ANN inference accelerators. The main reason is the more demanding functionality required from emerging devices in such applications and hence the more severe impact of device variations on SNN operation and performance. For example, SNNs rely on fixed-magnitude spikes to update the conductance of multiple devices in parallel; because of that, the change in conductance can vary drastically even with minor variations in the I-V switching voltages, which in turn leads to very significant variations in the STDP characteristics (Prezioso et al., 2018). On the other hand, as already mentioned above, the implementation of simpler ex-situ trained ANNs is much less challenging because the write voltage amplitudes in such networks can be adjusted uniquely for each device based on the feedback information during conductance tuning (Alibart et al., 2012).

Superconductor circuits, e.g., of the rapid single flux quantum (RSFQ) variety (Likharev and Semenov, 1991), are naturally suited for spiking circuits because information is encoded in SFQ voltage pulses. For example, Josephson-junction spiking neurons operating at up to the 50-GHz range have been demonstrated in Segall et al. (2017). The historical challenges of this approach include inferior fabrication technology (which may finally change given the enormous investments in superconductor quantum computing), the low-temperature operation that limits its applications, and the lack of efficient analog memory circuits (Likharev, 2012). Photonic spiking neural networks (e.g., Feldmann et al., 2019) and hybrid superconductor/optoelectronic neuromorphic circuits (Buckley et al., 2017) share the challenges of the already discussed photonic neuromorphic inference approaches.

Reservoir Computing: Due to their intrinsic memory properties, recurrent neural networks, such as the Google Neural Machine Translation model, are especially suitable for processing sequential or temporal data. Reservoir computing (RC) networks are a special type of efficiently trainable recurrent network (Lukoševičius and Jaeger, 2009) motivated by cortical information processing (Maass et al., 2004). Among their variants are liquid state machines (Maass et al., 2002), a spiking RC network, and echo state networks (Jaeger, 2001), an RC network built on a very sparse recurrent network. The main component of an RC network is the reservoir, a nonlinear recurrent network that maps inputs into a higher-dimensional spatio-temporal representation and has the property of a fading memory of the previous inputs and network states. The other component is a readout layer, which maps the intermediate state to the outputs. All connections in the reservoir are fixed, and only the weights in the readout layer are trainable. Because of this, and because of the sparse intermediate representation, faster and online algorithms can be employed for training such networks, which is the primary strength of this approach.
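A minimal echo state network sketch (illustrative hyperparameters) showing the defining property described above: the recurrent reservoir weights are random and fixed, and only the linear readout is trained, here with ridge regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 100

# Fixed, random reservoir: scale the recurrent matrix so its spectral radius
# is below 1, which gives the fading-memory ("echo state") property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect the reservoir states for an input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]

# Only the readout is trained (ridge regression); the reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print(float(np.mean((X @ W_out - y) ** 2)))   # small training error
```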

Though both the readout and the reservoir can be realized with the analog VMM circuits discussed above, intriguing opportunities for implementing the reservoir are offered by nonlinear physical phenomena in superconductor, magnetic, and photonic devices (Tanaka et al., 2019). For example, spoken-vowel recognition was demonstrated with an RC network in which the reservoir was implemented with four coupled MTJ-based spin-torque oscillators (STOs) (Romera et al., 2018). In that demonstration, the temporal input corresponding to spoken vowels is first converted to the frequency domain, which is in turn mapped to DC bias currents applied to the MTJ devices. The voltage induced on the STO devices is used as the output of the reservoir. The reservoir exploits the nonlinear dependence of the STO frequency on the DC current and the history-dependent transient motion of the spins in the MTJ's free layer.

Various photonic reservoirs have been suggested (Shasti et al., 2021), e.g., utilizing the transient properties of optical systems with time-delayed feedback (Bruiner et al., 2013), or relying on superimposing light that passively circulates through waveguides, splitters, and combiners, with nonlinear conversion to the electronic domain (Vandoorne et al., 2014), to achieve a high-dimensional response. The dynamics of superconductor circuits have recently been studied for efficient and extremely fast reservoir implementations (Rowlands et al., 2021). Specifically, the proposed reservoir is based on a Josephson transmission line (JTL) formed by a chain of biased JJs. An input pulse at one end of the JTL causes a rapid cascade of junction phase slips that propagate an SFQ pulse to the other end. Because the JJs modulate each other's currents, a complex dynamical state is achieved.

There are several general concerns with RC approaches. On the algorithmic level, RC is inferior in performance to state-of-the-art approaches, and it is unclear whether, without further algorithmic improvements, such a handicap can be outweighed by the advantages of online training. The main concern for the various hardware implementations is again device variation, e.g., whether the hardware can produce repeatable results when the same input is applied. An additional concern for magnetic devices is the limited coupling between devices, which could impact the effectiveness of the reservoir.

Hyperdimensional Computing/Associative Memory: Hyperdimensional computing (Kanerva, 2009) circuits have recently been demonstrated with ReRAM (Li et al., 2016a) and PCM (Karunaratne et al., 2020) devices. The low-level operation of hyperdimensional computing is closely related to that of associative, or content-addressable, memory (Hertz et al., 1991). Specifically, at the core of such an approach is an associative memory array circuit that outputs the memory row entry closest, in the Hamming-distance sense, to a binary input vector serving as the search key. Assuming a symmetric binary representation with −1 and +1 encoding, the Hamming distance is linearly related to the dot product: it equals half the difference between the vector length and the dot product of the input vector with the stored memory row values. The critical functionality in hyperdimensional computing is therefore again a VMM operation. After the VMM operation has been completed, its results are passed to a winner-take-all circuit (Hertz et al., 1991) (a harder version of the softmax function; Bridle, 1989) that selects the element with the smallest Hamming distance while discarding all other outputs. An additional simplification is that both the inputs and the weights of the VMM are binary.
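The relation between Hamming distance and the dot product for ±1 vectors, d_H = (n − x·y)/2, is what lets the associative search reuse the same VMM hardware; a small numerical check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024                                     # hyperdimensional vector length

memory = rng.choice([-1, +1], size=(8, n))   # stored hypervectors (one per row)
key = memory[3].copy()
key[:100] *= -1                              # noisy query: flip 100 positions

# Hamming distance via the dot product: d_H = (n - x . y) / 2
dots = memory @ key
hamming = (n - dots) // 2

best = int(np.argmin(hamming))               # winner-take-all over the outputs
print(best, hamming[best])                   # -> 3 100
```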

In principle, binary VMM can be implemented more efficiently in hardware than its fully analog version. Similar to binary neural networks (Simons and Lee, 2019), the apparent tradeoff is the worse functional performance of hyperdimensional computing. Another essential feature of hyperdimensional computing is its suitability for fast “one-shot” or incremental learning (Kanerva, 2009), though at the cost of a much more redundant memory array. Note that fast “one-shot” learning is not unique to hyperdimensional computing: for example, Hebbian learning and its many variants used to train associative neural networks have a recursive form and are naturally incremental, in that the weights can be modified based only on the current weight values and the new pattern to be stored in the network (Hertz et al., 1991).

Concluding Remarks: Many emerging device and circuit technologies are currently being explored for neuromorphic hardware implementations. Neuromorphic inference accelerators utilizing analog in-memory computing based on floating-gate memories are perhaps the closest to widespread adoption, given the maturity of the technology, the practicality of its applications, and its competitive performance compared to conventional (digital CMOS) circuit implementations. Comparing the performance prospects of other neuromorphic approaches is not straightforward because many proposals target algorithms with inferior functional performance, especially those closely mimicking the brain's operation. Barring a substantial breakthrough in ML algorithms or the emergence of new applications that could benefit from high-performance, low-accuracy neuromorphic hardware, this inferior functional performance may limit the practicality of such approaches. The main challenge, even more so for advanced neuromorphic computing concepts, remains the significant variation in the operation of emerging devices.

5. Outlook

This report has laid out exciting applications of fast ML that enable scientific discovery across many domains. The field is developing rapidly, with new studies and results appearing frequently, yet it remains young, rich with potential, and full of open challenges across disciplines. Beyond what is covered here, we hope that the discussion of scientific use cases and their overlaps will inspire readers to entertain and pursue additional applications.

In Section 4, we provided an overview of techniques for developing powerful ML algorithms that must operate in high-throughput, low-latency environments. This covers system design and training as well as efficient deployment and implementation of those ML models. Hardware implementation is discussed under two main categories: current conventional CMOS and more speculative beyond-CMOS technologies. In the conventional CMOS case, with the end of Moore's Law, recent emphasis has shifted to advanced hardware architectures designed for ML, and we surveyed popular and emerging architectures together with their strengths and shortcomings. A key consideration across this multitude of hardware is the codesign of a given ML algorithm for a specific platform, including the algorithm's architecture and programmability. FPGAs are a particularly relevant and important example of such a platform and are the use case discussed in Section 4.4. Finally, we concluded with an overview of beyond-CMOS technologies, which offer exciting, ultra-efficient substrates on which to implement ML models. While speculative, these technologies offer potential orders-of-magnitude improvements over conventional ones.

Both ML training and deployment techniques and computer architectures are rapidly moving fields, with new work appearing faster than any single report can track. As new methods continue to be introduced in both spaces, it is particularly important to understand the codesign of new algorithms for different hardware and the ease of use of the tool flows for deploying those algorithms; innovations here will allow rapid and broad adoption of powerful new ML hardware. For beyond-CMOS technologies, these practical considerations matter alongside the maturity of the technology, its integration into computing architectures, and how such devices are programmed.

We look forward to revisiting these topics in the near future to see how quickly advances may come in applications, ML techniques, and hardware platforms—and most importantly their confluence to enable paradigm-shifting breakthroughs in science.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

MB was employed by the company Xilinx Inc.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The handling editor EC is currently organizing a Research Topic with the authors JD, ML, and JN.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community was important for the development of this project. The work by AD was supported by the U.S. Department of Energy (DOE), Office of Science, Office of High Energy Physics, under Award No. DE-SC0010129, and the Fast Machine Learning in Science Workshop was financially supported by Southern Methodist University. The work by NT was supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the DOE, Office of Science, Office of High Energy Physics and the DOE Early Career Research program under Award No. DE-0000247070. The work by DS was supported by NSF E2CDA grant #1740352. JA acknowledges primary support from our DOE program and secondary support from National Science Foundation under grant TRIPODS + X: RES-1839234Y. The work of DG was supported in part by the National Science Foundation under Grant No. CNS-2003098 and by a gift from Intel Corporation. YL acknowledges support of this work from National Institutes of Health grant R01HL131750 and National Science Foundation grant CBET-2039310. The work by MN was supported by the U.S. National Science Foundation under Cooperative Agreement OAC-1836650 and Award No. OAC-1934757. KS is supported by the U.S. Department of Energy and the National Science Foundation. The work by BK is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the Emmy Noether Grant No. 420484612. The work by MD was supported by Jefferson Science Associates, LLC under Contract No. DE-AC05-06OR23177 with the DOE, Office of Science, Office of Nuclear Physics. We would like to acknowledge community members who have explicitly supported this work: Maria Acosta Flechas (Fermilab), Anthony Aportela (UC San Diego), Thomas Calvet (CPP Marseille), Leonardo Cristella (CERN), Daniel Diaz (UC San Diego), Caterina Doglioni (Lund), Maria Domenica Galati (University of Groningen), Elham E Khoda (University of Washington), Farah Fahim (Fermilab), Davide Giri (Columbia University), Benjamin Hawks (Fermilab), Duc Hoang (MIT), Burt Holzman (Fermilab), Shih-Chieh Hsu (University of Washington), Sergo Jindariani (Fermilab), Iris Johnson (Fermilab), Raghav Kansal (UC San Diego), Ryan Kastner (UC San Diego), Erik Katsavounidis (MIT), Jeffrey Krupa (MIT), Pan Li (Purdue University), Vladimir Loncar (CERN, Institute of Physics Belgrade), Sandeep Madireddy (ANL), Ethan Marx (MIT), Patrick McCormack (MIT), Andres Meza (UC San Diego), Jovan Mitrevski (Fermilab), Mohammed Attia Mohammed (CHEP-FU), Farouk Mokhtar (UC San Diego), Eric Moreno (MIT), Srishti Nagu (Lucknow University), Rohin Narayan (SMU), Noah Paladino (MIT), Adrian Alan Pol (CERN), Zhiqiang Que (Imperial College), Sang Eon Park (MIT), Subramanian Ramamoorthy, Dylan Rankin (MIT), Simon Rothman (MIT), Ashish Sharma (IIT Madras), Sioni Summers (CERN), Pietro Vischia (UC Louvain), Jean-Roch Vlimant (Caltech), and Olivia Weng (UC San Diego).

Footnotes

1. fastmachinelearning.org

2. The DAMA/NaI and subsequent DAMA/LIBRA experiments claim the direct observation of DM particles in the galactic halo (Bernabei et al., 2013), but the results are in tension with negative results from similar experiments (Schumann, 2019).

3. In these comparisons, we treat HBM and HBM2 as external memory since they are used in the same way as DDR4 or GDDR memory.

References

Aad, G. (2008). The ATLAS experiment at the CERN large hadron collider. JINST 3:S08003. doi: 10.1088/1748-0221/3/08/S08003

CrossRef Full Text | Google Scholar

Aad, G., Abajyan, T., Abbott, B., Abdallah, J., Abdel Khalek, S., Abdelalim, A. A., et al. (2012). Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B 716, 1–29. doi: 10.1016/j.physletb.2012.08.020

CrossRef Full Text | Google Scholar

Aarrestad, T., Loncar, V., Ghielmetti, N., Pierini, M., Summers, S., Ngadiuba, J., et al. (2021). Fast convolutional neural networks on FPGAs with hls4ml. Mach. Learn. Sci. Tech., 2, 045015. doi: 10.1088/2632-2153/ac0ea1

CrossRef Full Text | Google Scholar

Aartsen, M., and The IceCube, Fermi-LAT, MAGIC, AGILE, ASAS-SN, HAWC, et al. (2018). Multimessenger observations of a flaring blazar coincident with high-energy neutrino IceCube-170922A. Science 361, eaat1378. doi: 10.1126/science.aat1378

PubMed Abstract | CrossRef Full Text | Google Scholar

Abbott, B., Abbott, R., Abbott, T., Abernathy, M., Acernese, F., Ackley, K., et al. (2016a). Properties of the binary black hole merger gw150914. Phys. Rev. Lett. 116, 241102. doi: 10.1103/PhysRevLett.116.241102

PubMed Abstract | CrossRef Full Text | Google Scholar

Abbott, B., Abbott, R., Abbott, T. D., Acernese, F., Ackley, K., Adams, C., et al. (2017a). Multi-messenger observations of a binary neutron star merger. Astrophys. J. Lett. 848, L12. doi: 10.3847/2041-8213/aa91c9

CrossRef Full Text | Google Scholar

Abbott, B. P., Abbott, R., Abbott, T. D., Abernathy, M. R., Acernese, F., Ackley, K., et al. (2016b). Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett. 116, 061102. doi: 10.1103/PhysRevLett.116.061102

PubMed Abstract | CrossRef Full Text | Google Scholar

Abbott, B. P., Abbott, R., Abbott, T. D., Acernese, F., Ackley, K., Adams, C., et al. (2017b). Gw170817: observation of gravitational waves from a binary neutron star inspiral. Phys. Rev. Lett. 119, 161101. doi: 10.1103/PhysRevLett.119.161101

PubMed Abstract | CrossRef Full Text | Google Scholar

Abbott, T. M. C., Abdalla, F. B., Alarcon, A., Allam, S., Andrade-Oliveira, F., Annis, J., et al. (2019a). Dark energy survey year 1 results: measurement of the baryon acoustic oscillation scale in the distribution of galaxies to redshift 1. Mon. Not. R. Astron. Soc. 483, 4866–4883. doi: 10.1093/mnras/sty3351

CrossRef Full Text | Google Scholar

Abbott, T. M. C., Allam, S., Andersen, P., Angus, C., Asorey, J., Avelino, A., et al. (2019b). First cosmology results using type ia supernovae from the dark energy survey: constraints on cosmological parameters. Astrophys. J. Lett. 872, L30. doi: 10.3847/2041-8213/ab04fa

CrossRef Full Text | Google Scholar

Abdul Khalek, R., Accardi, A., Adam, J., Adamiak, D., Akers, W., Albaladejo, M., et al. (2021). Science Requirements and Detector Concepts for the Electron-Ion Collider. EIC Yellow Report.

Google Scholar

Abe, K., Abgrall, N., Aihara, H., Ajima, Y., Albert, J., Allan, D., et al. (2011). The t2k experiment. Nuclear Instrum. Methods Phys. Res. A 659, 106–135. doi: 10.1016/j.nima.2011.06.067

CrossRef Full Text | Google Scholar

Abe, T., et al. (2010). Belle II technical design report.

Google Scholar

Abi, B. (2020). Deep Underground Neutrino Experiment (DUNE). Far Detector Technical Design Report, Volume II DUNE Physics.

Google Scholar

Abi, B., et al. (2020b). Supernova neutrino burst detection with the deep underground neutrino experiment.

Google Scholar

Abi, B., Acciarri, R., Acero, M., Adamov, G., Adams, D., Adinolfi, M., et al. (2020a). Neutrino interaction classification with a convolutional neural network in the DUNE far detector. Phys. Rev. D 102, 092003. doi: 10.1103/PhysRevD.102.092003

CrossRef Full Text | Google Scholar

Abratenko, P., et al. (2020). A convolutional neural network for multiple particle identification in the microboone liquid argon time projection chamber.

Google Scholar

Abusalma, F., Ambrose, D., Artikov, A., Bernstein, R., Blazey, G. C., Bloise, C., et al. (2018). Expression of interest for evolution of the Mu2e experiment.

Google Scholar

Accardi, A., Albacete, J. L., Anselmino, M., Armesto, N., Aschenauer, E. C., Bacchetta, A., et al. (2016). Electron ion collider: the next QCD frontier: understanding the glue that binds us all. Eur. Phys. J. A 52, 268. doi: 10.1140/epja/i2016-16268-9

CrossRef Full Text | Google Scholar

Acciarri, R., et al. (2020). Cosmic Background Removal with Deep Neural Networks in SBND.

Google Scholar

Acernese, F., Agathos, M., Agatsuma, K., Aisa, D., Allemandou, N., Allocca, A., et al. (2014). Advanced virgo: a second-generation interferometric gravitational wave detector. Classical Quant. Gravity 32, 024001. doi: 10.1088/0264-9381/32/2/024001

CrossRef Full Text | Google Scholar

Adam, G., Hoskins, B., Prezioso, M., Merrikh-Bayat, F., Chakrabarti, B., and Strukov, D. (2017). 3-D memristor crossbars for analog and neuromorphic computing applications. IEEE Trans Electron. Dev. 64, 312–318. doi: 10.1109/TED.2016.2630925

CrossRef Full Text | Google Scholar

Adamson, P., Aliaga, L., Ambrose, D., Anfimov, N., Antoshkin, A., Arrieta-Diaz, E., et al. (2017). Constraints on oscillation parameters from νe appearance and νμ disappearance in nova. Phys. Rev. Lett. 118, 032012. doi: 10.1103/PhysRevD.98.032012

PubMed Abstract | CrossRef Full Text | Google Scholar

Adarsh, P., Rathi, P., and Kumar, M. (2020). “YOLO v3-tiny: object detection and recognition using one stage improved model,” in 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS) (Coimbatore), 687–694.

PubMed Abstract | Google Scholar

Adda, C., et al. (2018). First demonstration of “Leaky Integrate and Fire” artificial neuron behavior on (V0.95Cr0.05)2O3 thin film. MRS Commun. 8, 835–841. doi: 10.1557/mrc.2018.90

CrossRef Full Text | Google Scholar

Affeldt, C., Danzmann, K., Dooley, K. L., Grote, H., Hewitson, M., Hild, S., et al. (2014). Advanced techniques in GEO 600. Classical Quant. Gravity 31, 224002. doi: 10.1088/0264-9381/31/22/224002

CrossRef Full Text | Google Scholar

Agar, J. C., Naul, B., Pandya, S., van der Walt, S., Maher, J., Ren, Y., et al. (2019). Revealing ferroelectric switching character using deep recurrent neural networks. Nat. Commun. 10, 4809. doi: 10.1038/s41467-019-12750-0

PubMed Abstract | CrossRef Full Text | Google Scholar

Agnese, R., Anderson, A., Aramaki, T., Arnquist, I., Baker, W., Barker, D., et al. (2017). Projected sensitivity of the SuperCDMS SNOLAB experiment. Phys. Rev. D 95, 082002. doi: 10.1103/PhysRevD.95.082002

CrossRef Full Text | Google Scholar

agx (2019). Nvidia AGX.

Ahn, S., Hu, S. X., Damianou, A., Lawrence, N. D., and Dai, Z. (2019). “Variational information distillation for knowledge transfer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Long Beach, CA: IEEE), 9163–9171.

Google Scholar

Ajimura, S., et al. (2017). Technical design report (TDR): searching for a sterile neutrino at J-PARC MLF (E56. JSNS2).

Google Scholar

Akimov, D., et al. (2015). The COHERENT experiment at the spallation neutron source.

PubMed Abstract | Google Scholar

Al Kharusi, S., et al. (2020). SNEWS 2.0: a next-generation supernova early warning system for multi-messenger astronomy.

Google Scholar

Albericio, J., and Judd, P. (2016). Cnvlutin: Ineffectual-neuron-free deep neural network computing. Comput. Arch. News 44, 1–13. doi: 10.1145/3007787.3001138

CrossRef Full Text | Google Scholar

Albertsson, K., et al. (2018). Machine learning in high energy physics community white paper. J. Phys. Conf. Ser. 1085, 022008. doi: 10.1088/1742-6596/1085/2/022008

CrossRef Full Text | Google Scholar

Alessandro, Franco G., and Fraser, N. (2020). Xilinx/brevitas: bnn_pynq-r1.

Alexander, J., Battaglieri, M., Echenard, B., Essig, R., Graham, M., Izaguirre, E., et al. (2016). Dark sectors 2016 workshop: community report.

Google Scholar

Ali, A. A., Hossain, S. M., Hovsepian, K., Rahman, M. M., Plarre, K., and Kumar, S. (2012). “mpuff: automated detection of cigarette smoking puffs from respiration measurements,” in Proceedings of the 11th International Conference on Information Processing in Sensor Networks, 269–280.

Google Scholar

Alibart, F., Gao, L., Hoskins, B., and Strukov, D. B. (2012). High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm. Nanotechnology 23, 075201. doi: 10.1088/0957-4484/23/7/075201

PubMed Abstract | CrossRef Full Text | Google Scholar

Alimena, J., Iiyama, Y., and Kieseler, J. (2020). Fast convolutional neural networks for identifying long-lived particles in a high-granularity calorimeter. J. Instrument. 15, P12006–P12006. doi: 10.1088/1748-0221/15/12/P12006

CrossRef Full Text | Google Scholar

Althubaiti, A. (2016). Information bias in health research: definition, pitfalls, and adjustment methods. J. Multidiscip Healthc 9, 211. doi: 10.2147/JMDH.S104807

PubMed Abstract | CrossRef Full Text | Google Scholar

Altmannshofer, W., et al. (2019). The Belle II Physics Book. PTEP, 2019, 123C01. [Erratum: PTEP 2020, 029201 (2020)].

Ambats, I., et al. (1998). The MINOS detectors technical design report.

Ambrogio, S., Narayanan, P., Tsai, H., Shelby, R. M., Boybat, I., di Nolfo, C., et al. (2018). Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67. doi: 10.1038/s41586-018-0180-5

PubMed Abstract | CrossRef Full Text | Google Scholar

Amerio, S., Amoruso, S., Antonello, M., Aprili, P., Armenante, M., Arneodo, F., et al. (2004). Design, construction and tests of the icarus t600 detector. Nuclear Instrument. Methods Phys. Res. A 527, 329–410. doi: 10.1016/j.nima.2004.02.044

CrossRef Full Text | Google Scholar

Amiaux, J., Scaramella, R., Mellier, Y., Altieri, B., Burigana, C., Da Silva, A., et al. (2012). “Euclid mission: building of a reference survey,” in Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, Vol. 8442 (International Society for Optics and Photonics), 84420Z.

Google Scholar

Amiri, M. M., and Gündüz, D. (2020). Federated learning over wireless fading channels. IEEE Trans. Wireless Commun. 19, 3546–3557. doi: 10.1109/TWC.2020.2974748

PubMed Abstract | CrossRef Full Text | Google Scholar

Ando, K., Ueyoshi, K., Orimo, K., Yonekawa, H., Sato, S., Nakahara, H., et al. (2017). “Brein memory: a 13-layer 4.2 k neuron/0.8 m synapse binary/ternary reconfigurable in-memory deep neural network accelerator in 65 nm cmos,” in VLSI Circuits, 2017 Symposium on (Kyoto: IEEE), C24–C25.

Google Scholar

Antonioli, P., Fienberg, R. T., Fleurot, F., Fukuda, Y., Fulgione, W., Habig, A., et al. (2004). SNEWS: The supernova early warning system. New J. Phys. 6, 114. doi: 10.1088/1367-2630/6/1/114

PubMed Abstract | CrossRef Full Text | Google Scholar

Aprahamian, A., et al. (2015). Reaching for the horizon: The 2015 long range plan for nuclear science.

Ariyaratne, A., Bluvstein, D., Myers, B. A., and Jayich, A. C. B. (2018). Nanoscale electrical conductivity imaging using a nitrogen-vacancy center in diamond. Nat. Commun. 9, 2406. doi: 10.1038/s41467-018-04798-1

PubMed Abstract | CrossRef Full Text | Google Scholar

Armasu, L. (2016). Deep learning on a stick: Movidius' 'fathom' neural compute stick.

Asanovic, K., and Morgan, N. (1991). Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks. Berkeley, CA: International Computer Science Institute.

Google Scholar

Aso, Y., Michimura, Y., Somiya, K., Ando, M., Miyakawa, O., Sekiguchi, T., et al. (2013). Interferometer design of the kagra gravitational wave detector. Phys. Rev. D 88, 043007. doi: 10.1103/PhysRevD.88.043007

CrossRef Full Text | Google Scholar

Astone, P., Cerdá-Durán, P., Di Palma, I., Drago, M., Muciaccia, F., Palomba, C., et al. (2018). New method to observe gravitational waves emitted by core collapse supernovae. Phys. Rev. D 98, 122002. doi: 10.1103/PhysRevD.98.122002

CrossRef Full Text | Google Scholar

ATL (1996). ATLAS Liquid-Argon calorimeter: Technical Design Report. Technical Report, CERN, Geneva.

Aubriet, F., Chaoui, N., Chety, R., Maunit, B., Millon, E., and Muller, J.-F. (2002). Laser ablation mass spectrometry: a tool to investigate matter transfer processes during pulsed-laser deposition experiments. Appl. Surf. Sci. 186, 282–287. doi: 10.1016/S0169-4332(01)00645-6

CrossRef Full Text | Google Scholar

Ayres, D. S., et al. (2007). The NOvA Technical Design Report.

Google Scholar

Bacon, D., Rabbah, R., and Shukla, S. (2013). Fpga programming for the masses: the programmability of fpgas must improve if they are to be part of mainstream computing. Queue 56, 57–63. doi: 10.1145/2436696.2443836

CrossRef Full Text | Google Scholar

Bacon, D. J., Massey, R. J., Refregier, A. R., and Ellis, R. S. (2003). Joint cosmic shear measurements with the Keck and William Herschel Telescopes. Mon. Not. R. Astron. Soc. 344, 673–685. doi: 10.1046/j.1365-8711.2003.06877.x

CrossRef Full Text | Google Scholar

Bacon, D. J., Refregier, A. R., and Ellis, R. S. (2000). Detection of weak gravitational lensing by large-scale structure. Mon. Not. R. Astron. Soc. 318, 625–640. doi: 10.1046/j.1365-8711.2000.03851.x

CrossRef Full Text | Google Scholar

Baehr, S., McCarney, S., Meggendorfer, F., Poehler, J., Skambraks, S., Unger, K., et al. (2019). Low latency neural networks using heterogenous resources on fpga for the belle ii trigger.

Google Scholar

Bagheri, E., Jin, J., Dauwels, J., Cash, S., and Westover, M. B. (2019). A fast machine learning approach to facilitate the detection of interictal epileptiform discharges in the scalp electroencephalogram. J. Neurosci. Methods 326, 108362. doi: 10.1016/j.jneumeth.2019.108362

PubMed Abstract | CrossRef Full Text | Google Scholar

Bai, J., Lu, F., and Zhang, K. (2019). Onnx: Open Neural Network Exchange. GitHub Repository.

Baker, F., Ainsworth, S. R., Dye, J. T., Crammer, C., Thun, M. J., Hoffmann, D., et al. (2000). Health risks associated with cigar smoking. JAMA 284, 735–740. doi: 10.1001/jama.284.6.735

PubMed Abstract | CrossRef Full Text | Google Scholar

Banner, R., Nahshan, Y., Hoffer, E., and Soudry, D. (2018). Post-training 4-bit quantization of convolution networks for rapid-deployment. arXiv preprint arXiv:1810.05723.

Google Scholar

Bartoszek, L., et al. (2014). Mu2e Technical Design Report.

Google Scholar

Bartoszek, L., Barnes, E., Miller, J. P., Mott, J., Palladino, A., Quirk, J., et al. (2015). Mu2e technical design report.

Google Scholar

Battaglia, P. W., Pascanu, R., Lai, M., Rezende, D., and Kavukcuoglu, K. (2016). Interaction networks for learning about objects, relations and physics.

Google Scholar

Bavandpour, M., Mahmoodi, M. R., Nili, H., Bayat, F., Merrikh, F., Prezioso, M., et al. (2018). “Mixed-signal neuromorphic inference accelerators: recent results and future prospects,” in International Electron Device Meeting (IEDM'18) (San Francisco, CA), 20.4.1–20.4.4.

Google Scholar

Bavandpour, M., Sahay, S., Mahmoodi, M., and Strukov, D. (2020). “Mixed-signal vector-by-matrix multiplier circuits based on 3D-NAND memories for neurocomputing,” in Design Automation and Test in Europe (DATE'20), 696–701.

Google Scholar

Bavandpour, M., Sahay, S., Mahmoodi, M. R., and Strukov, D. (2019). 3D-aCortex: An ultra-compact energy-efficient neurocomputing platform based on commercial 3D-NAND flash memories. arxiv preprint, arXiv:1908.02472.

Google Scholar

Bayat, F. M., Guo, X., Om'mani, H., Do, N., Likharev, K., and Strukov, D. (2015). “Redesigning commercial floating-gate memory for analog computing applications,” in International Symposium on Circuits and Systems (ISCAS'15) (Lisbon), 1921–1924.

Google Scholar

Bayat, F. M., Prezioso, M., Chakrabarti, B., Nili, H., Kataeva, I., and Strukov, D. (2018). Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits. Nat. Commun. 9, 2331. doi: 10.1038/s41467-018-04482-4

PubMed Abstract | CrossRef Full Text | Google Scholar

Beardsley, A. P., Hazelton, B. J., Sullivan, I. S., Carroll, P., Barry, N., Rahimi, M., et al. (2016). First season MWA EoR power spectrum results at redshift 7. Astrophys. J. 833, 102. doi: 10.3847/1538-4357/833/1/102

CrossRef Full Text | Google Scholar

Bedaque, P., Boehnlein, A., Cromaz, M., Diefenthaler, M., Elouadrhiri, L., Horn, T., et al. (2021). A.I. for nuclear physics. Eur. Phys. J. A 57, 100. doi: 10.1140/epja/s10050-020-00290-x

CrossRef Full Text | Google Scholar

Bedri, A., Li, D., Khurana, R., Bhuwalka, K., and Goel, M. (2020). “Fitbyte: automatic diet monitoring in unconstrained situations using multimodal sensing on eyeglasses,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20 (New York, NY: Association for Computing Machinery), 1–12.

Google Scholar

Beheshtipour, B., and Papa, M. (2020). Deep learning for clustering of continuous gravitational wave candidates. Phys. Rev. D 101, 064009. doi: 10.1103/PhysRevD.101.064009

CrossRef Full Text | Google Scholar

Belayneh, D., Carminati, F., Farbin, A., et al. (2020). Calorimetry with deep learning: particle simulation and reconstruction for collider. Eur. Phys. J. C 80, 688. doi: 10.1140/epjc/s10052-020-8251-9

CrossRef Full Text | Google Scholar

Bellows, P., and Hutchings, B. (1998). “Jhdl-an hdl for reconfigurable systems,” in Proceedings. IEEE Symposium on FPGAs for Custom Computing Machines (Cat. No.98TB100251) (Washington, DC).

Google Scholar

Benaglia, S., Gisbert, V. G., Perrino, A. P., Amo, C. A., and Garcia, R. (2018). Fast and high-resolution mapping of elastic properties of biomolecules and polymers with bimodal AFM. Nat. Protoc. 13, 2890–2907. doi: 10.1038/s41596-018-0070-1

PubMed Abstract | CrossRef Full Text | Google Scholar

Bendavid, J. (2017). Efficient monte carlo integration using boosted decision trees and generative deep neural networks.

Google Scholar

Bengio, Y., Courville, A., and Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798–1828. doi: 10.1109/TPAMI.2013.50

PubMed Abstract | CrossRef Full Text | Google Scholar

Benstetter, G., Biberger, R., and Liu, D. (2009). A review of advanced scanning probe microscope analysis of functional films and semiconductor devices. Thin. Solid Film. 517, 5100–5105. doi: 10.1016/j.tsf.2009.03.176

CrossRef Full Text | Google Scholar

Berggren, K., Xia, Q., Likharev, K. K., Strukov, D. B., Jiang, H., Mikolajick, T., et al. (2020). Roadmap on emerging hardware and technology for machine learning. Nanotechnology 32, 012002. doi: 10.1088/1361-6528/aba70f

PubMed Abstract | CrossRef Full Text | Google Scholar

Bernabei, R., Belli, P., Cappella, F., Caracciolo, V., Castellano, S., Cerulli, R., et al. (2013). Final model independent result of DAMA/LIBRA–phase1. Eur. Phys. J. C 73, 2648. doi: 10.1140/epjc/s10052-013-2648-7

CrossRef Full Text

Bertin, P., and Touati, H. (1994). “Pam programming environments: practice and experience,” in Proceedings of IEEE Workshop on FPGA's for Custom Computing Machines (Napa Valley, CA: IEEE).

Google Scholar

Betoule, M., Kessler, R., Guy, J., Mosher, J., Hardin, D., Biswas, R., et al. (2014). Improved cosmological constraints from a joint analysis of the SDSS-II and SNLS supernova samples. Astron. Astrophys. 568, A22. doi: 10.1051/0004-6361/201423413

CrossRef Full Text | Google Scholar

Bhattacharyya, S. S., Brebner, G., Janneck, J. W., Eker, J., von Platen, C., Mattavelli, M., et al. (2009). Opendf: a dataflow toolset for reconfigurable hardware and multicore systems. SIGARCH Comput. Archit. News. 36, 29–35. doi: 10.1145/1556444.1556449

CrossRef Full Text | Google Scholar

Bi, S., Wang, T., Tobias, N., Nordrum, J., Wang, S., Halvorsen, G., et al. (2018). Auracle: Detecting eating episodes with an ear-mounted sensor. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 1–27. doi: 10.1145/3264902

CrossRef Full Text | Google Scholar

Binnig, G., Quate, C. F., and Gerber, C. (1986). Atomic force microscope. Phys. Rev. Lett. 56, 930–933. doi: 10.1103/PhysRevLett.56.930

PubMed Abstract | CrossRef Full Text | Google Scholar

Biswas, R., Blackburn, L., Cao, J., Essick, R., Hodge, K. A., Katsavounidis, E., et al. (2013). Application of machine learning algorithms to the study of noise artifacts in gravitational-wave data. Phys. Rev. D 88, 062003. doi: 10.1103/PhysRevD.88.062003

CrossRef Full Text | Google Scholar

Blalock, D., Ortiz, J. J. G., Frankle, J., and Guttag, J. (2020). What is the state of neural network pruning? arXiv preprint arXiv:2003.03033.

Google Scholar

Blott, M., Preusser, T., Fraser, N., Gambardella, G., O'Brien, K., and Umuroglu, Y. (2018). FINN-R: an end-to-end deep-learning framework for fast exploration of quantized neural networks. ACM Trans. Reconfigurable Technol. Syst. 11, 1–23. doi: 10.1145/3242897

CrossRef Full Text | Google Scholar

Boehm, C., and Fayet, P. (2004). Scalar dark matter candidates. Nuclear Phys. B 683, 219–263. doi: 10.1016/j.nuclphysb.2004.01.015

CrossRef Full Text | Google Scholar

Bond, B., Hammil, K., Litchev, L., and Singh, S. (2010). “Fpga circuit synthesis of accelerator data-parallel programs,” in 2010 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines (Charlotte, NC: IEEE).

Google Scholar

Borodinov, N., Neumayer, S., Kalinin, S. V., Ovchinnikova, O. S., Vasudevan, R. K., and Jesse, S. (2019). Deep neural networks for understanding noisy data applied to physical property extraction in scanning probe microscopy. npj Comput. Mater. 5, 25. doi: 10.1038/s41524-019-0148-5

CrossRef Full Text | Google Scholar

Bosshart, P., Daly, D., Gibb, G., Izzard, M., McKeown, N., Rexford, J., et al. (2014). P4: programming protocol-independent packet processors. SIGCOMM Comput. Commun. Rev. 44, 87–95. doi: 10.1145/2656877.2656890

CrossRef Full Text | Google Scholar

Bridle, J. S. (1989). “Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters,” in International Conference on Neural Information Processing Systems (NIPS–89) (Denver, CO), 211–217.

Google Scholar

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., et al. (2020). Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33, 877–901.

Brunner, D., Soriano, M. C., Mirasso, C. R., and Fischer, I. (2013). Parallel photonic information processing at gigabyte per second data rates using transient states. Nat. Commun. 4, 1364. doi: 10.1038/ncomms2368

PubMed Abstract | CrossRef Full Text | Google Scholar

Buckley, S., Chiles, J., Mccaughan, A. N., Moody, G., Silverman, K. L., Stevens, M. J., et al. (2017). All-silicon light-emitting diodes waveguide-integrated with superconducting single-photon detectors. Appl. Phys. Lett. 111, 141101. doi: 10.1063/1.4994692

CrossRef Full Text | Google Scholar

Bui, N., Pham, N., Barnitz, J. J., Zou, Z., Nguyen, P., Truong, H., et al. (2019). “ebp: a wearable system for frequent and comfortable blood pressure monitoring from user's ear,” in The 25th Annual International Conference on Mobile Computing and Networking (Los Cabos, BCS), 1–17.

Google Scholar

Buluc, A., and Gilbert, J. R. (2008). “Challenges and advances in parallel sparse matrix-matrix multiplication,” in 2008 37th International Conference on Parallel Processing (Portland, OR: IEEE), 503–510.

Google Scholar

Burr, G., Shelby, R. M., di Nolfo, C., Jang, J. W., Shenoy, R. S., Narayanan, P., et al. (2015). Experimental demonstration and tolerancing of a large-scale neural network (165000 synapses) using phase-change memory as the synaptic weight element. IEEE Trans. Electron. Dev. 62, 3498–3507. doi: 10.1109/TED.2015.2439635

CrossRef Full Text | Google Scholar

Burr, G., Shelby, R. M., Sebastian, A., Kim, S., Kim, S., Sidler, S., et al. (2017). Neuromorphic computing using nonvolatile memory. Adv. Phys. 2, 89–124. doi: 10.1080/23746149.2016.1259585

CrossRef Full Text | Google Scholar

Butler, K. T., Davies, D. W., Cartwright, H., Isayev, O., and Walsh, A. (2018). Machine learning for molecular and materials science. Nature 559, 547–555. doi: 10.1038/s41586-018-0337-2

PubMed Abstract | CrossRef Full Text | Google Scholar

Bychkov, D., Linder, N., and Turkki, R. (2018). Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci. Rep. 8, 3395. doi: 10.1038/s41598-018-21758-3

PubMed Abstract | CrossRef Full Text

Cai, F., et al. (2019a). A fully integrated reprogrammable memristor–CMOS system for efficient multiply-accumulate operations. Nat. Electron. 2, 290–299. doi: 10.1038/s41928-019-0270-x

CrossRef Full Text | Google Scholar

Cai, F., et al. (2020a). Power-efficient combinatorial optimization using intrinsic noise in memristor hopfield neural networks. Nat. Electron. 3, 409–418. doi: 10.1038/s41928-020-0436-6

CrossRef Full Text | Google Scholar

Cai, H., Gan, C., Wang, T., Zhang, Z., and Han, S. (2019b). Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791.

Google Scholar

Cai, H., Zhu, L., and Han, S. (2018). Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332.

Google Scholar

Cai, Y., Yao, Z., Dong, Z., Gholami, A., Mahoney, M. W., and Keutzer, K. (2020b). “Zeroq: a novel zero shot quantization framework,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13169–13178.

Google Scholar

Cai, Z., He, X., Sun, J., and Vasconcelos, N. (2017). “Deep learning with low precision by half-wave gaussian quantization,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, HI), 5918–5926.

Google Scholar

Calabrese, F. D., Wang, L., Ghadimi, E., Peters, G., Hanzo, L., and Soldati, P. (2018). Learning radio resource management in RANs: framework, opportunities, and challenges. IEEE Commun. Mag. 56, 138–145. doi: 10.1109/MCOM.2018.1701031

CrossRef Full Text | Google Scholar

Calafiura, P., Farrell, S., Gray, H., Vlimant, J., Innocente, V., Salzburger, A., et al. (2018). “Trackml: a high energy physics particle tracking challenge,” in 2018 IEEE 14th International Conference on e-Science (e-Science) (Amsterdam: IEEE), 344–344.

Google Scholar

Caldeira, P., Penha, J. C., Bragana, L., Ferreira, R., Nacif, J. A. M., Ferreira, R., et al. (2018). “From java to fpga: an experience with the intel harp system,” in 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD) (Lyon).

Google Scholar

Carleo, G., Cirac, I., Cranmer, K., Daudet, L., Schuld, M., Tishby, N., et al. (2019). Machine learning and the physical sciences. Rev. Mod. Phys. 91, 045002. doi: 10.1103/RevModPhys.91.045002

CrossRef Full Text | Google Scholar

Cascadelake (2019). Intel cascade lake.

Casola, F., van der Sar, T., and Yacoby, A. (2018). Probing condensed matter physics with magnetometry based on nitrogen-vacancy centres in diamond. Nat. Rev. Mater. 3, 17088. doi: 10.1038/natrevmats.2017.88

PubMed Abstract | CrossRef Full Text | Google Scholar

CCIX (2020). Ccix consortium.

Cerebras (2019). Cerebras.

CERN (2020). The Phase-2 Upgrade of the CMS Level-1 Trigger. Technical Report CERN-LHCC-2020-004. CMS-TDR-021, CERN, Geneva. Final version.

Google Scholar

Chakraborty, S., Tomsett, R., Raghavendra, R., Harborne, D., Alzantot, M., Cerutti, F., et al. (2017). “Interpretability of deep learning models: a survey of results,” in 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (San Francisco, CA: IEEE), 1–6.

Google Scholar

Challita, U., Dong, L., and Saad, W. (2018). Proactive resource management for LTE in unlicensed spectrum: a deep learning perspective. IEEE Trans. Wireless Commun. 17, 4674–4689. doi: 10.1109/TWC.2018.2829773

CrossRef Full Text | Google Scholar

Champion, K., Lusch, B., Kutz, J. N., and Brunton, S. L. (2019). Data-driven discovery of coordinates and governing equations. Proc. Natl. Acad. Sci. U.S.A. 116, 22445–22451. doi: 10.1073/pnas.1906995116

PubMed Abstract | CrossRef Full Text | Google Scholar

Chan, M. L., Heng, I. S., and Messenger, C. (2020). Detection and classification of supernova gravitational wave signals: a deep learning approach. Phys. Rev. D 102, 043022. doi: 10.1103/PhysRevD.102.043022

CrossRef Full Text | Google Scholar

Chang, Y.-W., and Sheu, T. W. H. (2020). GPU acceleration of a patient-specific airway image segmentation and its assessment.

Google Scholar

Chatrchyan, S., Khachatryan, V., Sirunyan, A. M., Tumasyan, A., Adam, W., Aguilo, E., et al. (2012). Observation of a new boson at a mass of 125GeV with the CMS experiment at the LHC. Phys. Lett. B 716, 30–61. doi: 10.1016/j.physletb.2012.08.021

CrossRef Full Text | Google Scholar

Chauvin, Y. (1989). “A back propagation network with optimal use of hidden units,” in Advances in Neural Information Processing (Denver, CO).

PubMed Abstract | Google Scholar

Chawla, R., Bandyopadhyay, A., Srinivasan, V., and Hasler, P. (2004). “A 531 nW/MHz, 128 × 32 current-mode programmable analog vector-matrix multiplier with over two decades of linearity,” in IEEE Custom Integrated Circuits Conference (CICC'04) (Orlando, FL), 651–654.

Google Scholar

Chen, C., Cerri, O., Nguyen, T. Q., Vlimant, J.-R., and Pierini, M. (2020). Data augmentation at the LHC through analysis-specific fast simulation with deep learning.

Google Scholar

Chen, J., and Ran, X. (2019). Deep learning with edge computing: a review. Proc. IEEE 107, 1655–1674. doi: 10.1109/JPROC.2019.2921977

CrossRef Full Text | Google Scholar

Chen, M., Poor, H. V., Saad, W., and Cui, S. (2021). Convergence time optimization for federated learning over wireless networks. IEEE Trans. Wireless Commun. 20, 2457–2471. doi: 10.1109/TWC.2020.3042530

CrossRef Full Text | Google Scholar

Chen, M., Yang, J., Zhou, J., Hao, Y., Zhang, J., and Youn, C. (2018). 5G-Smart diabetes: Toward personalized diabetes diagnosis with healthcare big data clouds. IEEE Commun. Mag. 56, 16–23. doi: 10.1109/MCOM.2018.1700788

CrossRef Full Text | Google Scholar

Chen, P.-H. C., Gadepalli, K., MacDonald, R., Liu, Y., Kadowaki, S., Nagpal, K., et al. (2019). An augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis. Nat. Med. 25, 1453–1457. doi: 10.1038/s41591-019-0539-7

PubMed Abstract | CrossRef Full Text | Google Scholar

Chen, T., and Guestrin, C. (2016). “Xgboost,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (San Francisco, CA).

Google Scholar

Chen, Y.-H., Krishna, T., Emer, J., and Sze, V. (2017). Eyeriss: an energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J. Solid State Circ. 52, 127–138. doi: 10.1109/JSSC.2016.2616357

CrossRef Full Text | Google Scholar

Chi, Y., Guo, L., Lau, J., Choi, Y.-K., Wang, J., and Cong, J. (2021). Extending high-level synthesis for task-parallel programs. Proc. Annu. IEEE Symp. Field Program Cust. Comput. Mach. 2021:10.1109/fccm51124.2021.00032. doi: 10.1109/fccm51124.2021.00032

PubMed Abstract | CrossRef Full Text | Google Scholar

Chin, T.-W., Chuang, P. I.-J., Chandra, V., and Marculescu, D. (2020). One weight bitwidth to rule them all. arXiv preprint arXiv:2008.09916. doi: 10.1007/978-3-030-68238-5_7

CrossRef Full Text | Google Scholar

Choi, B., Jo, K., Choi, S., and Choi, J. (2017a). “Surgical-tools detection based on convolutional neural network in laparoscopic robot-assisted surgery,” in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Jeju: IEEE), 1756–1759.

PubMed Abstract | Google Scholar

Choi, J., Vijayaraghavan, M., Sherman, B., and Chlipala, A. Arvind (2017b). Kami: a platform for high-level parametric hardware specification and its modular verification. Proc. ACM Program. Lang. 1, 1–30. doi: 10.1145/3110268

CrossRef Full Text | Google Scholar

Choi, J., Wang, Z., Venkataramani, S., Chuang, P. I.-J., Srinivasan, V., and Gopalakrishnan, K. (2018). Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085.

Google Scholar

Choi, Y.-K., Cong, J., Fang, Z., Hao, Y., Reinman, G., and Wei, P. (2019). In-depth analysis on microarchitectures of modern heterogeneous cpu-fpga platforms. ACM Trans. Reconfigurable Technol. Syst. 12, 1–20. doi: 10.1145/3294054

CrossRef Full Text | Google Scholar

Choma, N., Murnane, D., Ju, X., Calafiura, P., Conlon, S., Farrell, S., et al. (2020). Track seeding and labelling with embedded-space graph neural networks.

Google Scholar

Choukroun, Y., Kravchik, E., Yang, F., and Kisilev, P. (2019). “Low-bit quantization of neural networks for efficient inference,” in ICCV Workshops (Seoul), 3009–3018.

PubMed Abstract | Google Scholar

Chrisey, D. B., and Hubler, G. K. (1994). Pulsed Laser Deposition of Thin Films. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms.

Google Scholar

Christiansen, E. M., Yang, S. J., Ando, D. M., Javaherian, A., Skibinski, G., Lipnick, S., et al. (2018). In silico labeling: Predicting fluorescent labels in unlabeled images. Cell. 173, 792-803.e19. doi: 10.1016/j.cell.2018.03.040

PubMed Abstract | CrossRef Full Text | Google Scholar

Chu, M., Kim, B., Park, S., Hwang, H., Jeon, M., Lee, B. H., et al. (2014). Neuromorphic hardware system for visual pattern recognition with memristor array and CMOS neuron. IEEE Trans. Ind. Electron. 62, 2410–2419. doi: 10.1109/TIE.2014.2356439

CrossRef Full Text | Google Scholar

Chua, A. J. K., and Vallisneri, M. (2020). Learning bayesian posteriors with neural networks for gravitational-wave inference. Phys. Rev. Lett. 124, 041102. doi: 10.1103/PhysRevLett.124.041102

PubMed Abstract | CrossRef Full Text | Google Scholar

Chugh, N., Vasista, V., Purini, S., and Bondhugula, U. (2016). “A dsl compiler for accelerating image processing pipelines on fpgas,” in 2016 International Conference on Parallel Architecture and Compilation Techniques (PACT) (Haifa).

Google Scholar

Chun, K. S., Bhattacharya, S., Dolbear, C., Kashanchi, J., and Thomaz, E. (2020). “Intraoral temperature and inertial sensing in automated dietary assessment: a feasibility study,” in Proceedings of the 2020 International Symposium on Wearable Computers, 27–31.

Google Scholar

Chung, E., Fowers, J., Ovtcharov, K., Papamichael, M., Caulfield, A., Massengill, T., et al. (2018). Serving dnns in real time at datacenter scale with project brainwave. IEEE Micro 38, 8–20. doi: 10.1109/MM.2018.022071131

CrossRef Full Text | Google Scholar

Chung, E. S., Davis, J. D., and Lee, J. (2013). “Linqits: big data on little clients,” in Proceedings of the 40th Annual International Symposium on Computer Architecture (Tel-Aviv: Association for Computing Machinery).

Google Scholar

Cireşan, D. C., Meier, U., Gambardella, L. M., and Schmidhuber, J. (2010). Deep, big, simple neural nets for handwritten digit recognition. Neural Comput. 22, 3207. doi: 10.1162/NECO_a_00052

PubMed Abstract | CrossRef Full Text | Google Scholar

Cleland, W. E., and Stern, E. G. (1994). Signal processing considerations for liquid ionization calorimeters in a high rate environment. Nucl. Instrum. Meth. A 338, 467–497. doi: 10.1016/0168-9002(94)91332-3

CrossRef Full Text | Google Scholar

Clow, J., Tzimpragos, G., Dangwal, D., Guo, S., McMahan, J., and Sherwood, T. (2017). “A pythonic approach for rapid hardware prototyping and instrumentation,” in 2017 27th International Conference on Field Programmable Logic and Applications (FPL) (Ghent), 1–7.

Google Scholar

CMS Collaboration (2017). The Phase-2 Upgrade of the CMS Endcap Calorimeter.

Coelho, C. (2019). QKeras.

Coelho, C. N., Kuusela, A., Li, S., Zhuang, H., Ngadiuba, J., Aarrestad, T. K., et al. (2021). Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors. Nat. Mach. Intell. 3, 675–686.

Google Scholar

Coelho, C. N., Kuusela, A., Zhuang, H., Aarrestad, T., Loncar, V., Ngadiuba, J., et al. (2020). Automatic deep heterogeneous quantization of deep neural networks for ultra low-area, low-latency inference on the edge at particle colliders. Nat. Mach. Intell. doi: 10.1038/s42256-021-00356-5

CrossRef Full Text

Collett, T. E., and Auger, M. W. (2014). Cosmological constraints from the double source plane lens SDSSJ0946+1006. Mon. Not. R. Astron. Soc. 443, 969–976. doi: 10.1093/mnras/stu1190

CrossRef Full Text | Google Scholar

Collins, L., Somnath, S., Kalinin, S. V., and Belianinov, A. (2020a). “Scanning probe microscopy in the information age,” in Handbook on Big Data and Machine Learning in the Physical Sciences, World Scientific Series on Emerging Technologies (World Scientific), 231–282.

Google Scholar

Collins, L., Vasudevan, R. K., and Sehirlioglu, A. (2020b). Visualizing charge transport and nanoscale electrochemistry by hyperspectral kelvin probe force microscopy. ACS Appl. Mater. Interfaces 12, 33361–33369. doi: 10.1021/acsami.0c06426

PubMed Abstract | CrossRef Full Text | Google Scholar

Cooks, R. G., and Yan, X. (2018). Mass spectrometry for synthesis and analysis. Annu. Rev. Anal. Chem. 11, 1–28. doi: 10.1146/annurev-anchem-061417-125820

PubMed Abstract | CrossRef Full Text | Google Scholar

Courbariaux, M., Bengio, Y., and David, J.-P. (2015). “BinaryConnect: training deep neural networks with binary weights during propagations,” in Advances in Neural Information Processing Systems (Montréal, QC), 3123–3131.

Google Scholar

Crowley, E. J., Gray, G., and Storkey, A. J. (2018). “Moonshine: distilling with cheap convolutions,” in NeurIPS (Montréal, QC), 2893–2903.

Google Scholar

Cuoco, E., Powell, J., Cavaglià, M., Ackley, K., Bejger, M., Chatterjee, C., et al. (2020). Enhancing gravitational-wave science with machine learning. Mach. Learn. Sci. Technol. 2, 011002. doi: 10.1088/2632-2153/abb93a

CrossRef Full Text | Google Scholar

Dauchot, J. P., Edart, S., Wautelet, M., and Hecq, M. (1995). Synthesis of zirconium nitride films monitored by in situ soft x-ray spectrometry. Vacuum 46, 927–930. doi: 10.1016/0042-207X(95)00074-7

CrossRef Full Text | Google Scholar

de Dinechin, F., Klein, C., and Pasca, B. (2009). “Generating high-performance custom floating-point pipelines,” in 2009 International Conference on Field Programmable Logic and Applications (Prague).

Google Scholar

de Oliveira, L., Paganini, M., and Nachman, B. (2017). Learning particle physics by example: location-aware generative adversarial networks for physics synthesis. Comput. Softw. Big Sci. 1, 4. doi: 10.1007/s41781-017-0004-6

CrossRef Full Text | Google Scholar

de Silva, B. M., Champion, K., Quade, M., et al. (2020). PySINDy: a Python package for the sparse identification of nonlinear dynamics from data. J. Open Source Softw. 5, 2104. doi: 10.21105/joss.02104

CrossRef Full Text | Google Scholar

Del Sozzo, E., Baghdadi, R., Amarasinghe, S., and Santambrogio, M. D. (2017). “A common backend for hardware acceleration on fpga,” in 2017 IEEE International Conference on Computer Design (ICCD) (Boston, MA: IEEE).

Google Scholar

Delorimier, M., Kapre, N., Mehta, N., and Dehon, A. (2011). Spatial hardware implementation for sparse graph algorithms in graphstep. ACM Trans. Auton. Adapt. Syst. 6, 1–20. doi: 10.1145/2019583.2019584

CrossRef Full Text | Google Scholar

Delubac, T., Bautista, J. E., Busca, N. G., Rich, J., Kirkby, D., Bailey, S., et al. (2015). Baryon acoustic oscillations in the Lyα forest of BOSS DR11 quasars. Astron. Astrophys. 574, A59. doi: 10.1051/0004-6361/201423969

CrossRef Full Text | Google Scholar

DeZoort, G., Thais, S., Ojalvo, I., Elmer, P., Razavimaleki, V., Duarte, J., et al. (2021). Charged particle tracking via edge-classifying interaction networks. Comput. Softw. Big Sci. 5, 26. doi: 10.1007/s41781-021-00073-z

CrossRef Full Text | Google Scholar

Di Sipio, R., Faucci Giannelli, M., Ketabchi Haghighat, S., and Palazzo, S. (2019). DijetGAN: A generative-adversarial network approach for the simulation of QCD dijet events at the LHC. JHEP 08, 110. doi: 10.1007/JHEP08(2019)110

CrossRef Full Text | Google Scholar

Diorio, C., Hasler, P., Minch, A., and Mead, C. A. (1996). A single-transistor silicon synapse. IEEE Trans. Electron. Dev. 43, 1972–1980. doi: 10.1109/16.543035

CrossRef Full Text | Google Scholar

DOE (2020). Data, artificial intelligence, and machine learning at DOE scientific user facilities.

Google Scholar

Dominguez Sanchez, H., Huertas-Company, M., Bernardi, M., Tuccillo, D., and Fischer, J. L. (2018). Improving galaxy morphologies for SDSS with deep learning. Mon. Not. R. Astron. Soc. 476, 3661–3676. doi: 10.1093/mnras/sty338

CrossRef Full Text | Google Scholar

Dong, X., Chen, S., and Pan, S. J. (2017). Learning to prune deep neural networks via layer-wise optimal brain surgeon. arXiv preprint arXiv:1705.07565.

Google Scholar

Dong, Y., Hoover, A., Scisco, J., and Muth, E. (2012). A new method for measuring meal intake in humans via automated wrist motion tracking. Appl. Psychophysiol. Biofeedback 37, 205–215. doi: 10.1007/s10484-012-9194-1

PubMed Abstract | CrossRef Full Text | Google Scholar

Dong, Z., Gao, Y., Huang, Q., Wawrzynek, J., So, H. K., and Keutzer, K. (2021). “Hao: hardware-aware neural architecture optimization for efficient inference,” in 2021 IEEE 29th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (Orlando, FL: IEEE), 50–59.

Google Scholar

Dong, Z., Yao, Z., Cai, Y., Arfeen, D., Gholami, A., Mahoney, M. W., et al. (2020). “HAWQ-V2: hessian aware trace-weighted quantization of neural networks,” in Advances in Neural Information Processing Systems, 33.

Google Scholar

Dong, Z., Yao, Z., Gholami, A., Mahoney, M. W., and Keutzer, K. (2019). “Hawq: hessian aware quantization of neural networks with mixed-precision,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (Seoul: IEEE), 293–302.

Google Scholar

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., et al. (2020). An image is worth 16 × 16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

Google Scholar

Dreissigacker, C., Sharma, R., Messenger, C., Zhao, R., and Prix, R. (2019). Deep-learning continuous gravitational waves. Phys. Rev. D 100, 044009. doi: 10.1103/PhysRevD.100.044009

CrossRef Full Text | Google Scholar

Drielsma, F., Terao, K., Dominé, L., and Koh, D. H. (2021). “Scalable, end-to-end, deep-learning-based data reconstruction chain for particle imaging detectors,” in 34th Conference on Neural Information Processing Systems.

Google Scholar

Duarte, J., Han, S., Harris, P., Jindariani, S., Kreinar, E., Kreis, B., et al. (2018). Fast inference of deep neural networks in FPGAs for particle physics. J. Instrum. 13, P07027. doi: 10.1088/1748-0221/13/07/P07027

PubMed Abstract | CrossRef Full Text | Google Scholar

Duarte, J., Harris, P., Hauck, S., Holzman, B., Hsu, S.-C., Jindariani, S., et al. (2019). FPGA-accelerated machine learning inference as a service for particle physics computing. Comput. Softw. Big Sci. 3, 13. doi: 10.1007/s41781-019-0027-2

CrossRef Full Text

Duarte, J., and Vlimant, J.-R. (2020). Graph Neural Networks for Particle Tracking and Reconstruction. Artificial Intelligence for High Energy Physics.

PubMed Abstract | Google Scholar

Durant, B. L., Giroux, O., Harris, M., and Stam, N. (2017). Inside Volta: The World's Most Advanced Data Center GPU. NVidia Developer Blog.

Durst, D., Feldman, M., Huff, D., Akeley, D., Daly, R., Bernstein, G. L., et al. (2020). “Type-directed scheduling of streaming accelerators,” in Proceedings of the 41st ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI–20), June 15s20, 2020, London, UK (New York, NY: ACM).

Google Scholar

Echterhoff, J. M., and Wang, E. J. (2020). Par: Personal activity radius camera view for contextual sensing. arXiv preprint arXiv:2008.07204.

Google Scholar

Eckert, M., Meyer, D., Haase, J., and Klauer, B. (2016). Operating system concepts for reconfigurable computing: review and survey. Int. J. Reconfigurable Comput. 2016, 1–11. doi: 10.1155/2016/2478907

CrossRef Full Text | Google Scholar

Egelhoff, W. F. Jr., and Jacob, I. I. (1989). Reflection high-energy electron diffraction (RHEED) oscillations at 77 K. Phys. Rev. Lett. 62, 921–924. doi: 10.1103/PhysRevLett.62.921

PubMed Abstract | CrossRef Full Text | Google Scholar

Eisenstein, D. J., Zehavi, I., Hogg, D. W., Scoccimarro, R., Blanton, M. R., Nichol, R. C., et al. (2005). Detection of the baryon acoustic peak in the large-scale correlation function of SDSS luminous red galaxies. Astrophys. J. 633, 560–574. doi: 10.1086/466512

CrossRef Full Text | Google Scholar

Enno, L., Liu, S., and Chu, M. (2020). White Paper: Simplify Software Integration for Fpga Accelerators With OPAE. Intel Corporation.

Google Scholar

Epyc (2019). Amd launches Epyc rome, first 7nm cpu.

Esmaeilzadeh, H., Blem, E., Amant, R. S., Sankaralingam, K., and Burger, D. (2011). “Dark silicon and the end of multicore scaling,” in 2011 38th Annual International Symposium on Computer Architecture (ISCA) (San Jose, CA: IEEE), 365–376.

Google Scholar

Exxactcorp (2017). Taking a Deeper Look at the Amd Radeon Instinct Gpus for Deep Learning. Exxact Corporation Blog.

Fang, J., Shafiee, A., Abdel-Aziz, H., Thorsley, D., Georgiadis, G., and Hassoun, J. (2020a). Near-lossless post-training quantization of deep neural networks via a piecewise linear approximation. arXiv preprint arXiv:2002.00104. doi: 10.1007/978-3-030-58536-5_5

CrossRef Full Text


Google Scholar

Raissi, M., Perdikaris, P., and Karniadakis, G. E. (2019). Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707. doi: 10.1016/j.jcp.2018.10.045

CrossRef Full Text | Google Scholar

Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. (2020). “ZeRO: memory optimizations toward training trillion parameter models,” in International Conference for High Performance Computing, Networking, Storage and Analysis (SC–20), 20.

Ramprasad, R., Batra, R., Pilania, G., Mannodi-Kanakkithodi, A., and Kim, C. (2017). Machine learning in materials informatics: recent applications and prospects. Npj Comput. Mater. 3, 54. doi: 10.1038/s41524-017-0056-5

CrossRef Full Text | Google Scholar

Rankin, D. S., et al. (2020). FPGAs-as-a-Service Toolkit (FaaST).

Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. (2016). “XNOR-Net: imagenet classification using binary convolutional neural networks,” in European Conference on Computer Vision (Springer), 525–542.

Google Scholar

Razzano, M., and Cuoco, E. (2018). Image-based deep learning for classification of noise transients in gravitational wave detectors. Classical Quant. Gravity 35, 095016. doi: 10.1088/1361-6382/aab793

CrossRef Full Text | Google Scholar

Reddi, S., Zaheer, M., Sachan, D., Kale, S., and Kumar, S. (2018). “Adaptive methods for nonconvex optimization,” in Proceeding of 32nd Conference on Neural Information Processing Systems (NIPS 2018) (Montréal, QC).

PubMed Abstract | Google Scholar

Reiche, O., Özkan, M. A., Membarth, R., Teich, J., and Hannig, F. (2017). “Generating fpga-based image processing accelerators with hipacc: (invited paper),” in 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (Irvine, CA: IEEE).

Google Scholar

Reinders, J., Ashbaugh, B., Brodman, J., Kinsner, M., Pennycook, J., and Tian, X. (2020). Data Parallel C++: Mastering DPC++ for Programming of Heterogeneous Systems using C++ and SYCL. Apress.

Google Scholar

Ren, H. (2014). “A brief introduction on contemporary high-level synthesis,” in 2014 IEEE International Conference on IC Design Technology (Austin, TX), 1–4.

Google Scholar

Ren, J., He, Y., Wen, D., Yu, G., Huang, K., and Guo, D. (2020). Scheduling for cellular federated edge learning with importance and channel awareness. IEEE Trans. Wirel. Commun. 19, 7690–7703. doi: 10.1109/TWC.2020.3015671

CrossRef Full Text | Google Scholar

Richter, W. (1990). M. a. herman, h. sitter: Molecular Beam Epitaxy, Fundamentals and Current Status, Vol. 7 aus der reihe: Springer Series in Materials Science. Berlin: Springer-Verlag.

Google Scholar

Rieke, N., Hancox, J., Li, W., Milletari, F., Roth, H. R., Albarqouni, S., et al. (2020). The future of digital health with federated learning. NPJ Digit. Med. 3, 1–7. doi: 10.1038/s41746-020-00323-1

PubMed Abstract | CrossRef Full Text | Google Scholar

Riess, A. G., Filippenko, A. V., Challis, P., Clocchiatti, A., Diercks, A., Garnavich, P. M., et al. (1998). Observational evidence from supernovae for an accelerating universe and a cosmological constant. Astron. J. 116, 1009–1038. doi: 10.1086/300499

CrossRef Full Text | Google Scholar

Ríos, C., Youngblood, N., Cheng, Z., Le Gallo, M., Pernice, W. H. P., Wright, D., et al. (2019). In-memory computing on a photonic platform. Sci. Adv. 5, eaau5759. doi: 10.1126/sciadv.aau5759

PubMed Abstract | CrossRef Full Text | Google Scholar

Rodríguez, A. C., Kacprzak, T., Lucchi, A., Amara, A., Sgier, R., Fluri, J., et al. (2018). Fast cosmic web simulations with generative adversarial networks. Comput. Astrophys. Cosmol. 5, 4. doi: 10.1186/s40668-018-0026-4

PubMed Abstract | CrossRef Full Text | Google Scholar

Rolls, E. T., and Deco, G. (2010). Oxford University Press.

Romera, M., Talatchian, P., Tsunegi, S., Araujo, F. A., Cros, V., Bortolotti, P., et al. (2018). Vowel recognition with a four coupled spin-torque nano-oscillators. Nature 563, 230–234. doi: 10.1038/s41586-018-0632-y

PubMed Abstract | CrossRef Full Text | Google Scholar

Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., and Bengio, Y. (2014). Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550.

Google Scholar

Rowlands, G. E., Nguyen, M.-H., Ribeill, G. J., Wagner, A. P., Govia, L., Barbosa, W., et al. (2021). Reservoir computing with superconducting electronics. ArXive Preprint, arXiv:2103.02522.

Google Scholar

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211. doi: 10.1007/s11263-015-0816-y

CrossRef Full Text | Google Scholar

Sadek, I., Rehman, S. U., Codjo, J., and Abdulrazak, B. (2019). “Privacy and security of iot based healthcare systems: concerns, solutions, and recommendations,” in How AI Impacts Urban Living and Public Health, eds J. Pagán, M. Mokhtari, H. Aloulou, B. Abdulrazak, and M. F. Cabrera (Cham: Springer International Publishing), 3–17.

Google Scholar

Saighi, S., Mayr, C. G., Serrano-Gotarredona, T., Schmidt, H., Lecerf, G., Tomas, J., et al. (2015). Plasticity in memristive devices for spiking neural networks. Front. Neurosci. 9, 5. doi: 10.3389/fnins.2015.00051

PubMed Abstract | CrossRef Full Text

Sakellaropoulos, T., Vougas, K., Narang, S., Koinis, F., Kotsinas, A., Polyzos, A., et al. (2019). A deep learning framework for predicting response to therapy in cancer. Cell. Rep. 29, 3367.e4–3373.e4. doi: 10.1016/j.celrep.2019.11.017

PubMed Abstract | CrossRef Full Text | Google Scholar

Sanderson, R. E., Bellini, A., Casertano, S., Lu, J. R., Melchior, P., Libralato, M., et al. (2019). Astrometry with the wide-field infrared space telescope. J. Astron. Telesc. Instrument. Syst. 5, 044005. doi: 10.1117/1.JATIS.5.4.044005

CrossRef Full Text | Google Scholar

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2018). “MobilenetV2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT: IEEE), 4510–4520.

Google Scholar

Sathyaprakash, B. S., and Dhurandhar, S. V. (1991). Choice of filters for the detection of gravitational waves from coalescing binaries. Phys. Rev. D 44, 3819–3834. doi: 10.1103/PhysRevD.44.3819

PubMed Abstract | CrossRef Full Text | Google Scholar

Sato, K., et al. (2017). An in-depth look at google's first tensor processing unit (tpu).

Satpathy, S., Mohan, P., Das, S., and Debbarma, S. (2020). A new healthcare diagnosis system using an IoT-based fuzzy classifier with FPGA. J. Supercomput. 76, 5849–5861. doi: 10.1007/s11227-019-03013-2

CrossRef Full Text | Google Scholar

Savard, C. (2020). Level 1 Trigger Track Quality Machine Learning Models on FPGAs for the Phase 2 Upgrade of the CMS Experiment. Dallas, TX: Fast Machine Learning for Science Workshop.

Schkufza, E., Wei, M., and Rossbach, C. J. (2019). “Just-in-time compilation for verilog: a new technique for improving the fpga programming experience,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (Providence, RI: Association for Computing Machinery).

Google Scholar

Schmidt, J., Marques, M. R. G., Botti, S., and Marques, M. A. L. (2019). Recent advances and applications of machine learning in solid-state materials science. npj Comput. Mater. 5, 83. doi: 10.1038/s41524-019-0221-0

CrossRef Full Text

Scholberg, K. (2012). Supernova neutrino detection. Ann. Rev. Nucl. Part. Sci. 62, 81–103. doi: 10.1146/annurev-nucl-102711-095006

CrossRef Full Text | Google Scholar

Schumann, M. (2019). Direct detection of WIMP dark matter: concepts and status. J. Phys. G 46, 103003. doi: 10.1088/1361-6471/ab2ea5

CrossRef Full Text | Google Scholar

Scolnic, D. M., Jones, D. O., Rest, A., Pan, Y. C., Chornock, R., Foley, R. J., et al. (2018). The complete light-curve sample of spectroscopically confirmed SNe Ia from Pan-STARRS1 and cosmological constraints from the combined pantheon sample. Astrophys. J., 859, 101. doi: 10.3847/1538-4357/aab9bb

CrossRef Full Text | Google Scholar

Segal, O., Margala, M., Chalamalasetti, S. r., and Wright, M. (2014). “High level programming for heterogeneous architectures„” in 1st International Workshop on FPGAs for Software Programmers (Munich: FSP).

Google Scholar

Segall, K., Legro, M., Kaplan, S., Svitelskiy, O., Khadka, S., Crotty, P., et al. (2017). Synchronization dynamics on the picosecond time scale in coupled Josephson junction networks. Phys. Rev. E 95, 032220. doi: 10.1103/PhysRevE.95.032220

PubMed Abstract | CrossRef Full Text | Google Scholar

Seidel, J., Maksymovych, P., Batra, Y., Katan, A., Yang, S.-Y., He, Q., et al. (2010). Domain wall conductivity in la-doped BiFeO3. Phys. Rev. Lett. 105, 197603. doi: 10.1103/PhysRevLett.105.197603

PubMed Abstract | CrossRef Full Text | Google Scholar

Seiya, K., Hazelwood, K. J., Ibrahim, M. A., Nagaslaev, V. P., Nicklaus, D. J., Schupbach, B. A., et al. (2021). Accelerator real-time edge ai for distributed systems (reads) proposal.

Google Scholar

Sen, S., Subbaraju, V., Misra, A., Balan, R., and Lee, Y. (2020). Annapurna: an automated smartwatch-based eating detection and food journaling system. Pervasive Mob Comput. 68, 101259. doi: 10.1016/j.pmcj.2020.101259

CrossRef Full Text | Google Scholar

Sengupta, A., Panda, P., Wijesinghe, P., Kim, Y., and Roy, K. (2016). Magnetic tunnel junction mimics stochastic cortical spiking neurons. Nat. Sci. Rep. 6, 30039. doi: 10.1038/srep30039

PubMed Abstract | CrossRef Full Text | Google Scholar

Serrano-Gotarredona, T., Masquelier, T., Prodromakis, T., Indiveri, G., and Linares-Barranco, B. (2013). STDP and STDP variations with memristors for spiking neuromorphic learning systems. Front. Neurosci. 7, 2. doi: 10.3389/fnins.2013.00002

PubMed Abstract | CrossRef Full Text | Google Scholar

Shasti, B. J., et al. (2021). Photonics for artificial intelligence and neuromorphic computing. Nat. Photonics. 15, 102–114. doi: 10.1038/s41566-020-00754-y

PubMed Abstract | CrossRef Full Text | Google Scholar

Shazeer, N., and Stern, M. (2018). “Adafactor: adaptive learning rates with sublinear memory cost,” in International Conference on Machine Learning (Stockholm), 4596–4604.

Google Scholar

Shen, H., Huerta, E. A., Zhao, Z., Jennings, E., and Sharma, H. (2019). Deterministic and Bayesian Neural Networks for Low-Latency Gravitational Wave Parameter Estimation of Binary Black Hole Mergers.

Google Scholar

Shen, S., Dong, Z., Ye, J., Ma, L., Yao, Z., Gholami, A., et al. (2020). Q-bert: Hessian based ultra low precision quantization of bert. Proc. AAAI Conf. Artif. Intell. 34, 8815–8821. doi: 10.1609/aaai.v34i05.6409

CrossRef Full Text | Google Scholar

Shen, Y., et al. (2017). Deep learning with coherent nanophotonic circuits. Nat. Photonics 11, 441–447. doi: 10.1038/nphoton.2017.93

CrossRef Full Text | Google Scholar

Sheridan, P. M., Cai, F., Du, C., Ma, W., Zhang, Z., and Lu, W. D. (2017). Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789. doi: 10.1038/nnano.2017.83

PubMed Abstract | CrossRef Full Text | Google Scholar

Shukla, S., Hassan, M. F., Khan, M. K., Jung, L. T., and Awang, A. (2019). An analytical model to minimize the latency in healthcare internet-of-things in fog computing environment. PLoS ONE 14, e0224934. doi: 10.1371/journal.pone.0224934

PubMed Abstract | CrossRef Full Text | Google Scholar

Simola, U., Pelssers, B., Barge, D., Conrad, J., and Corander, J. (2019). Machine learning accelerated likelihood-free event reconstruction in dark matter direct detection. J. Instrument. 14, P03004–P03004. doi: 10.1088/1748-0221/14/03/P03004

CrossRef Full Text | Google Scholar

Simons, T., and Lee, D. (2019). A review of binarized neural networks. Electronics 8, 661. doi: 10.3390/electronics8060661

CrossRef Full Text | Google Scholar

Singh, S., and Greaves, D. J. (2008). “Kiwi: synthesis of fpga circuits from parallel programs,” in 2008 16th International Symposium on Field-Programmable Custom Computing Machines (Stanford, CA: IEEE), 3–12.

Google Scholar

Sirunyan, A. M., Tumasyan, A., Adam, W., Asilar, E., Bergauer, T., Brandstetter, J., et al. (2017). Particle-flow reconstruction and global event description with the CMS detector. JINST 12, P10003. doi: 10.1088/1748-0221/12/10/P10003

CrossRef Full Text | Google Scholar

Siu, D. M. D., Lee, K. C. M., Lo, M. C. K., Stassen, S. V., Wang, M., Zhang, I. Z. Q., et al. (2020). Deep-learning-assisted biophysical imaging cytometry at massive throughput delineates cell population heterogeneity. Lab. Chip. 20, 3696–3708. doi: 10.1039/D0LC00542H

PubMed Abstract | CrossRef Full Text | Google Scholar

Skambraks, S., Bähr, S., Becker, J., Kiesling, C., McCarney, S., Meggendorfer, F., et al. (2020). A 3d track finder for the belle II CDC l1 trigger. J. Phys. 1525, 012102. doi: 10.1088/1742-6596/1525/1/012102

CrossRef Full Text | Google Scholar

Skillman, A., and Edso, T. (2020). “A technical overview of cortex-m55 and ethos-u55: Arm's most capable processors for endpoint ai,” in 2020 IEEE Hot Chips 32 Symposium (HCS) (Palo Alto, CA: IEEE Computer Society), 1–20.

Google Scholar

Smidt, T. E. (2020). Euclidean symmetry and equivariance in machine learning. Trends Chem. 3, 82–85. doi: 10.26434/chemrxiv.12935198.v1

CrossRef Full Text | Google Scholar

Smidt, T. E., Geiger, M., and Miller, B. K. (2021). Finding symmetry breaking order parameters with euclidean neural networks. Phys. Rev. Res. 3, L012002. doi: 10.1103/PhysRevResearch.3.L012002

CrossRef Full Text | Google Scholar

Sokol, M. C., McGuigan, K. A., Verbrugge, R. R., and Epstein, R. S. (2005). Impact of medication adherence on hospitalization risk and healthcare cost. Med. Care 43, 521–530. doi: 10.1097/01.mlr.0000163641.86870.af

PubMed Abstract | CrossRef Full Text | Google Scholar

Somnath, S., Belianinov, A., Kalinin, S. V., and Jesse, S. (2015). Full information acquisition in piezoresponse force microscopy. Appl. Phys. Lett. 107, 263102. doi: 10.1063/1.4938482

PubMed Abstract | CrossRef Full Text | Google Scholar

Spergel, D. N., Verde, L., Peiris, H. V., Komatsu, E., Nolta, M. R., Bennett, C. L., et al. (2003). First-year wilkinson microwave anisotropy probe (WMAP) observations: determination of cosmological parameters. Astrophys. J. Suppl. Series 148, 175–194. doi: 10.1086/377226

CrossRef Full Text | Google Scholar

Stein, G. (2020). georgestein/ml-in-Cosmology: Machine Learning in Cosmology, Github.

Stewart, R., Duncan, K., Michaelson, G., Garcia, P., Bhowmik, D., and Wallace, A. (2018). Ripl: A Parallel Image Processing Language for Fpgas. New York, NY: Association for Computing Machinery.

Sujeeth, A. K., Lee, H., Brown, K. J., Chafi, H., Wu, M., Atreya, A. R., et al. (2011). “Optiml: an implicitly parallel domain-specific language for machine learning,” in Proceedings of the 28th International Conference on International Conference on Machine Learning (Bellevue, WA).

Google Scholar

Summers, S., Di Guglielmob, G., Duartec, J., Hand, S., Harrisd, P., Jindarianic, S., et al. (2020). Fast inference of Boosted Decision Trees in FPGAs for particle physics. J. Instrum. 15, P05026. doi: 10.1088/1748-0221/15/05/P05026

CrossRef Full Text | Google Scholar

Sun, H., Chen, X., Shi, Q., Hong, M., Fu, X., and Sidiropoulos, N. D. (2018). Learning to optimize: training deep neural networks for interference management. IEEE Trans. Signal Process. 66, 5438–5453. doi: 10.1109/TSP.2018.2866382

CrossRef Full Text | Google Scholar

Suyu, S. H., Bonvin, V., Courbin, F., Fassnacht, C. D., Rusu, C. E., Sluse, D., et al. (2017). H0LiCOW - I. H0 Lenses in COSMOGRAIL's Wellspring: program overview. Mon. Not. R. Astron. Soc. 468, 2590–2604. doi: 10.1093/mnras/stx483

CrossRef Full Text | Google Scholar

Svrcek, P., and Witten, E. (2006). Axions in string theory. JHEP 06, 051. doi: 10.1088/1126-6708/2006/06/051

CrossRef Full Text

Szydagis, M., et al. (2021). A review of basic energy reconstruction techniques in liquid xenon and argon detectors for dark matter and neutrino physics using nest.

Google Scholar

Tait, A. N., Wu, A. X., de Lima, T. F., Zhou, E., Shastri, B. J., Nahmi, M. A., et al. (2016). Microring weight banks. IEEE J. Select. Top. Quant. Electron. 22, 312–325. doi: 10.1109/JSTQE.2016.2573583

CrossRef Full Text | Google Scholar

Tan, M., Chen, B., Pang, R., Vasudevan, V., Sandler, M., Howard, A., et al. (2019). “Mnasnet: Platform-aware neural architecture search for mobile,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Long Beach, CA: IEEE), 2820–2828.

Google Scholar

Tanaka, G., Yamane, T., Hroux, J. B., Nakane, R., Kanazawa, N., Takeda, S., et al. (2019). Recent advances in physical reservoir computing: a review. Neural Netw. 115, 100–123. doi: 10.1016/j.neunet.2019.03.005

PubMed Abstract | CrossRef Full Text | Google Scholar

Tang, W.-B., Ji, Y., Zhang, M.-M., Chen, Z.-Y., Xu, Y.-Y., and Wang, Y.-W. (2018). A rapid detection method for morphological characteristics of biological cells based on phase imaging. Biomed. Res. Int. 2018, 4651639. doi: 10.1155/2018/4651639

PubMed Abstract | CrossRef Full Text | Google Scholar

Tarvainen, A., and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780.

Google Scholar

Tavanaei, A., Ghodrati, M., Kheradpisheh, S., Masquelier, T., and Maida, A. (2019). Deep learning in spiking neural networks. Neural Netw. 111, 47–63. doi: 10.1016/j.neunet.2018.12.002

PubMed Abstract | CrossRef Full Text | Google Scholar

Teich, P. (2018). Tearing Apart Google's tpu 3.0 ai Coprocessor. High Point: Stackhouse Publishing Inc.

Termopoli, V., Torrisi, E., Famiglini, G., Palma, P., Zappia, G., Cappiello, A., et al. (2019). Mass spectrometry based approach for organic synthesis monitoring. Anal. Chem. 91, 11916–11922. doi: 10.1021/acs.analchem.9b02681

PubMed Abstract | CrossRef Full Text | Google Scholar

Thais, S., and DeZoort, G. (2021). Instance Segmentation Gnns for One-Shot Conformal Tracking at the LHC. 34th Conference on Neural Information Processing Systems.

Google Scholar

Thakur, C. S., Molin, J. L., Cauwenberghs, G., Indiveri, G., Kumar, K., Qiao, N., et al. (2018). Large-scale neuromorphic spiking array processors: a quest to mimic the brain. Front. Neurosci. 12, 891. doi: 10.3389/fnins.2018.00891

PubMed Abstract | CrossRef Full Text | Google Scholar

Therhaag, J., and Team, T. C. D. (2012). Tmva - toolkit for multivariate data analysis. AIP Conf. Proc. 1504, 1013–1016. doi: 10.1063/1.4771869

PubMed Abstract | CrossRef Full Text | Google Scholar

Thomas, J. G., and Bond, D. S. (2015). Behavioral response to a just-in-time adaptive intervention (jitai) to reduce sedentary behavior in obese adults: Implications for jitai optimization. Health Psychol. 34, 1261. doi: 10.1037/hea0000304

PubMed Abstract | CrossRef Full Text | Google Scholar

Thomas, J. M. (1999). Design, synthesis, and in situ characterization of new solid catalysts. Angew. Chem. Int. Ed Engl. 38, 3588–3628. doi: 10.1002/(SICI)1521-3773(19991216)38:24<3588::AID-ANIE3588>3.0.CO;2-4

PubMed Abstract | CrossRef Full Text | Google Scholar

Todman, T., Constantinides, G., Wilton, S., Mencer, O., Luk, W., and Cheung, P. (2005). Reconfigurable computing: architectures and design methods. Comput. Digit. Techn. IEEE Proc. 152, 193–207. doi: 10.1049/ip-cdt:20045086

PubMed Abstract | CrossRef Full Text | Google Scholar

Tonutti, M., Gras, G., and Yang, G.-Z. (2017). A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery. Artif. Intell. Med. 80, 39–47. doi: 10.1016/j.artmed.2017.07.004

PubMed Abstract | CrossRef Full Text | Google Scholar

Trejo, O., Dadlani, A. L., De La Paz, F., Acharya, S., Kravec, R., Nordlund, D., et al. (2019). Elucidating the evolving atomic structure in atomic layer deposition reactions with in situ XANES and machine learning. Chem. Mater. 31, 8937–8947. doi: 10.1021/acs.chemmater.9b03025

CrossRef Full Text | Google Scholar

Trigub, M. V., Platonov, V. V., Evtushenko, G. S., Osipov, V. V., and Evtushenko, T. G. (2017). Laser monitors for high speed imaging of materials modification and production. Vacuum 143, 486–490. doi: 10.1016/j.vacuum.2017.03.016

CrossRef Full Text | Google Scholar

Trimberger, S. M. S. (2018). Three ages of fpgas: a retrospective on the first thirty years of fpga technology: this paper reflects on how moore's law has driven the design of fpgas through three epochs: the age of invention, the age of expansion, and the age of accumulation. IEEE Solid State Circ. Mag. 10, 16–29. doi: 10.1109/MSSC.2018.2822862

CrossRef Full Text | Google Scholar

Tsaris, A., Anderson, D., Bendavid, J., Calafiura, P., Cerati, G., Esseiva, J., et al. (2018). The HEP.TrkX project: deep learning for particle tracking. J. Phys. 1085, 042023. doi: 10.1088/1742-6596/1085/4/042023

CrossRef Full Text | Google Scholar

Tuma, T., Pantazi, A., Gallo, M., Sebastian, A., and Eleftheriou, E. (2016). Stochastic phase-change neurons. Nat. Nanotechnol. 11, 693–699. doi: 10.1038/nnano.2016.70

PubMed Abstract | CrossRef Full Text | Google Scholar

Turing (2019). Nvidia Turing gpu architecture.

Umuroglu, Y., Fraser, N. J., Gambardella, G., Blott, M., Leong, P., Jahre, M., et al. (2017). “Finn: a framework for fast, scalable binarized neural network inference,” in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (Monterey, CA), 65–74.

Google Scholar

Umuroglu, Y., and Jahre, M. (2017). Streamlined deployment for quantized neural networks. CoRR, abs/1709.04060.

Google Scholar

Umuroglu, Y., Rasnayake, L., and Sjalander, M. (2018). Bismo: A scalable bit-serial matrix multiplication overlay for reconfigurable computing. arXiv preprint arXiv:1806.08862. doi: 10.1109/FPL.2018.00059

CrossRef Full Text | Google Scholar

University, S. M. (2020). Fast Machine Learning for Science Workshop. Southern Methodist University.

Vajente, G., Huang, Y., Isi, M., Driggers, J. C., Kissel, J. S., Szczepańczyk, M. J., et al. (2020). Machine-learning nonstationary noise out of gravitational-wave detectors. Phys. Rev. D 101, 042003. doi: 10.1103/PhysRevD.101.042003

CrossRef Full Text | Google Scholar

Vandoorne, K., Mechet, P., Van Vaerenbergh, T., Fiers, M., Morthier, G., Verstraeten, D., et al. (2014). Experimental demonstration of reservoir computing on a silicon photonics chip. Nat. Commun. 5, 3541. doi: 10.1038/ncomms4541

PubMed Abstract | CrossRef Full Text | Google Scholar

Vaseghi, S. V. (2001). Wiener Filters. Hoboken, NJ: John Wiley & Sons, Ltd.

Google Scholar

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., et al. (2017). “Attention is all you need,” in International Conference on Neural Information Processing Systems (NIPS'17) (Long Beach, CA), 6000–6010.

Google Scholar

Villaescusa-Navarro, F., Anglés-Alcázar, D., Genel, S., Spergel, D. N., Somerville, R. S., Dave, R., et al. (2020). The CAMELS project: Cosmology and Astrophysics with MachinE Learning Simulations. arXiv e-prints, arXiv:2010.00619. doi: 10.3847/1538-4357/abf7ba

CrossRef Full Text | Google Scholar

Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., et al. (2019). Grandmaster level in StarCraft ii using multi-agent reinforcement learning. Nature 575, 350–354. doi: 10.1038/s41586-019-1724-z

PubMed Abstract | CrossRef Full Text | Google Scholar

Vipin, K., Shreejith, S., Gunasekera, D., Fahmy, S. A., and Kapre, N. (2013). “System-level fpga device driver with high-level synthesis support,” in 2013 International Conference on Field-Programmable Technology (FPT) (Kyoto), 128–135.

Google Scholar

Visser, C. W., Pohl, R., Sun, C., Römer, G.-W., Huis in ‘t Veld, B., and Lohse, D. (2015). Toward 3D printing of pure metals by laser-induced forward transfer. Adv. Mater. 27, 4087–4092. doi: 10.1002/adma.201501058

PubMed Abstract | CrossRef Full Text | Google Scholar

Vo, K., Pham, D., Nguyen, M., Mai, T., and Quan, T. (2017). “Combination of domain knowledge and deep learning for sentiment analysis,” in Multi-disciplinary Trends in Artificial Intelligence, eds S. Phon-Amnuaisuk, S. P. Ang, and S. Y. Lee (Springer International Publishing), 162–173.

Google Scholar

Volkov, M., Hashimoto, D. A., Rosman, G., Meireles, O. R., and Rus, D. (2017). “Machine learning and coresets for automated real-time video segmentation of laparoscopic and robot-assisted surgery,” in 2017 IEEE International Conference on Robotics and Automation (ICRA) (Singapore: IEEE).

Google Scholar

Wang, C., Grosse, R., Fidler, S., and Zhang, G. (2019a). Eigendamage: structured pruning in the kronecker-factored eigenbasis. arXiv preprint arXiv:1905.05934.

Google Scholar

Wang, K., Liu, Z., Lin, Y., Lin, J., and Han, S. (2019b). “HAQ: hardware-aware automated quantization,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Long Beach, CA).

Google Scholar

Wang, K., Liu, Z., Lin, Y., Lin, J., and Han, S. (2019c). “Haq: hardware-aware automated quantization with mixed precision,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Long Beach, CA: IEEE), 8612–8620.

Google Scholar

Wang, M., Yang, T., Acosta Flechas, M., Harris, P., Hawks, B., Holzman, B., et al. (2020a). GPU-accelerated machine learning inference as a service for computing in neutrino experiments.

PubMed Abstract | Google Scholar

Wang, S., Zhou, Y., Qin, X., Nair, S., Huang, X., and Liu, Y. (2020b). Label-free detection of rare circulating tumor cells by image analysis and machine learning. Sci. Rep. 10, 12226. doi: 10.1038/s41598-020-69056-1

PubMed Abstract | CrossRef Full Text | Google Scholar

Wang, T., Lu, X., and Wang, A. (2020c). A review: 3D printing of microwave absorption ceramics. Int. J. Appl. Ceram. Technol. 17, 2477–2491. doi: 10.1111/ijac.13604

CrossRef Full Text | Google Scholar

Wang, X., Han, Y., Leung, V. C. M., Niyato, D., Yan, X., and Chen, X. (2020d). Convergence of edge computing and deep learning: a comprehensive survey. IEEE Commun. Surveys Tutorials 22, 869–904. doi: 10.1109/COMST.2020.2970550

CrossRef Full Text | Google Scholar

Wang, X., Zhang, R., Sun, Y., and Qi, J. (2018a). “Kdgan: knowledge distillation with generative adversarial networks,” in NeurIPS, 783–794.

Google Scholar

Wang, Y., Gonzalez-Garcia, A., Berga, D., Herranz, L., Khan, F. S., Weijer, J., et al. (2020e). “Minegan: effective knowledge transfer from gans to target domains with few images,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Seattle, WA: IEEE), 9332–9341.

Google Scholar

Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M., and Solomon, J. M. (2018b). Dynamic graph CNN for learning on point clouds. CoRR, abs/1801.07829.

Google Scholar

Wang, Z., Joshi, S., Savelév, S., Song, W., Midya, R., Li, Y., et al. (2018c). Fully memristive neural networks for pattern classification with unsupervised learning. Nat. Electron. 1, 137–145. doi: 10.1038/s41928-018-0023-2

CrossRef Full Text | Google Scholar

Wang, Z., Joshi, S., Savel'ev, S. E., Jiang, H., Midya, R., Lin, P., et al. (2017). Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 101–108. doi: 10.1038/nmat4756

PubMed Abstract | CrossRef Full Text | Google Scholar

Weinberg, D. H., Bullock, J. S., Governato, F., Kuzio de Naray, R., and Peter, A. H. G. (2015). Cold dark matter: controversies on small scales. Proc. Natl. Acad. Sci. U.S.A. 112, 12249–12255. doi: 10.1073/pnas.1308716112

PubMed Abstract | CrossRef Full Text | Google Scholar

White, A., and Hingson, R. (2013). The burden of alcohol use: excessive alcohol consumption and related consequences among college students. Alcohol Res. 35, 201–218. Available online at: https://psycnet.apa.org/record/2014-07285-012

PubMed Abstract | Google Scholar

Widrow, B., and Angel, J. B. (1962). Reliable, trainable networks for computing and control. Aerospace Eng. 21, 78–123.

Wu, B., Dai, X., Zhang, P., Wang, Y., Sun, F., Wu, Y., et al. (2019). F“BNet: hardware-aware efficient convnet design via differentiable neural architecture search,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Long Beach, CA), 10734–10742.

Google Scholar

Wu, B., Wan, A., Yue, X., Jin, P., Zhao, S., Golmant, N., et al. (2018a). “Shift: a zero flop, zero parameter alternative to spatial convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT), 9127–9135.

Google Scholar

Wu, B., Wang, Y., Zhang, P., Tian, Y., Vajda, P., and Keutzer, K. (2018b). Mixed precision quantization of convnets via differentiable neural architecture search. arXiv preprint arXiv:1812.00090.

Google Scholar

Wu, S. L., Sun, S., Guan, W., Zhou, C., Chan, J., Cheng, C. L., et al. (2021). Application of quantum machine learning using the quantum kernel algorithm on high energy physics analysis at the lhc. Phys. Rev. Res. 3, 033221. doi: 10.1103/PhysRevResearch.3.033221

CrossRef Full Text | Google Scholar

Xiao, X., Wang, Z., and Rajasekaran, S. (2019). “Autoprune: automatic network pruning by regularizing auxiliary parameters,” in Advances in Neural Information Processing Systems (Vancouver, BC), 13681–13691.

Google Scholar

Xie, X., Niu, J., Liu, X., Chen, Z., Tang, S., and Yu, S. (2021). A survey on incorporating domain knowledge into deep learning for medical image analysis. Med. Image Anal. 101985–101985. doi: 10.1016/j.media.2021.101985

PubMed Abstract | CrossRef Full Text | Google Scholar

Xilinx (2020a). Sdaccel Development Environment, Advanced Micro Devices, Inc. (accessed February 16, 2020).

Google Scholar

Xilinx (2020b). Sdsoc Development Environment, Advanced Micro Devices, Inc. (accessed February 16, 2020).

Google Scholar

Xilinx (2020c). Vitis Unified Software Platform Overview, Advanced Micro Devices, Inc. (accessed February 16, 2020).

Google Scholar

Xilinx (2020d). What's an Acap Adaptive Compute Acceleration Platform, Advanced Micro Devices, Inc. (accessed February 16, 2020).

Google Scholar

Xilinx (2021). Xilinx Runtime (xrt) Architecture, Advanced Micro Devices, Inc. (accessed February 16, 2020).

Google Scholar

Yamamoto, Y., Leleu, T., Ganguli, S., and Mabuchi, H. (2020). Coherent ising machines–quantum optics and neural network perspectives. Appl. Phys. Lett. 117, 160501. doi: 10.1063/5.0016140

CrossRef Full Text | Google Scholar

Yang, J., Strukov, D., and Stewart, D. (2013). Memristive devices for computing. Nat. Nanotechnol. 8, 13–24. doi: 10.1038/nnano.2012.240

PubMed Abstract | CrossRef Full Text | Google Scholar

Yang, T., and Sze, V. (2019). “Design considerations for efficient deep neural networks on processing-in-memory accelerators,” in IEEE International Electron Device Meeting (IEDM–19) (San Francisco, CA: IEEE), 22.1.1–22.1.4.

Google Scholar

Yao, P., Wu, H., Gao, B., Tang, J., Zhang, Q., Zhang, W., et al. (2020a). Fully hardware-implemented memristor convolutional neural network. Nature. 577, 641–646. doi: 10.1038/s41586-020-1942-4

PubMed Abstract | CrossRef Full Text | Google Scholar

Yao, Z., Dong, Z., Zheng, Z., Gholami, A., Yu, J., Tan, E., et al. (2020b). HAWQV3: Dyadic neural network quantization. arXiv preprint arXiv:2011.10680.

Google Scholar

Yao, Z., Gholami, A., Keutzer, K., and Mahoney, M. (2019). Pyhessian: Neural networks through the lens of the hessian. arXiv preprint arXiv:1912.07145. doi: 10.1109/BigData50022.2020.9378171

PubMed Abstract | CrossRef Full Text | Google Scholar

Yao, Z., Gholami, A., Shen, S., Keutzer, K., and Mahoney, M. W. (2020c). Adahessian: An adaptive second order optimizer for machine learning. arXiv preprint arXiv:2006.00719.

Google Scholar

Yeon, H., Lin, P., Choi, C., Tan, S. H., Park, Y., Lee, D., et al. (2020). Alloying conducting channels for reliable neuromorphic computing. Nat. Nanotechnol. 15, 574–579. doi: 10.1038/s41565-020-0694-5

PubMed Abstract | CrossRef Full Text | Google Scholar

Yim, J., Joo, D., Bae, J., and Kim, J. (2017). “A gift from knowledge distillation: Fast optimization, network minimization and transfer learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, HI: IEEE), 4133–4141.

Google Scholar

Yin, H., Molchanov, P., Alvarez, J. M., Li, Z., Mallya, A., Hoiem, D., et al. (2020). “Dreaming to distill: Data-free knowledge transfer via deepinversion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Seattle, WA), 8715–8724.

Google Scholar

Yoshino, Y., Makino, T., Katayama, Y., and Hata, T. (2000). Optimization of zinc oxide thin film for surface acoustic wave filters by radio frequency sputtering. Vacuum 59, 538–545. doi: 10.1016/S0042-207X(00)00313-4

CrossRef Full Text | Google Scholar

You, S., Xu, C., Xu, C., and Tao, D. (2017). “Learning from multiple teacher networks,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Halifax, NS), 1285–1294.

Google Scholar

Yu, R., Li, A., Chen, C.-F., Lai, J.-H., Morariu, V. I., Han, X., et al. (2018). “Nisp: pruning networks using neuron importance score propagation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT), 9194–9203.

Google Scholar

Yu, S. (2018). Neuro-inspired computing with emerging nonvolatile memories. Proc. IEEE 106, 260–285. doi: 10.1109/JPROC.2018.2790840

CrossRef Full Text | Google Scholar

Yu, S., Yao, Z., Gholami, A., Dong, Z., Mahoney, M. W., and Keutzer, K. (2021). Hessian-aware pruning and optimal neural implant. arXiv preprint arXiv:2101.08940.

Google Scholar

Zarek, M., Layani, M., Cooperstein, I., Sachyani, E., Cohn, D., and Magdassi, S. (2016). 3D printing of shape memory polymers for flexible electronic devices. Adv. Mater. 28, 4449–4454. doi: 10.1002/adma.201503132

PubMed Abstract | CrossRef Full Text | Google Scholar

Zevin, M., Coughlin, S., Bahaadini, S., Besler, E., Rohani, N., Allen, S., et al. (2017). Gravity spy: integrating advanced ligo detector characterization, machine learning, and citizen science. Classical Quant. Gravity 34, 064003. doi: 10.1088/1361-6382/aa5cea

PubMed Abstract | CrossRef Full Text | Google Scholar

Zhang, D., Yang, J., Ye, D., and Hua, G. (2018a). “Lq-nets: Learned quantization for highly accurate and compact deep neural networks,” in European Conference on Computer Vision (ECCV) (Munich).

Google Scholar

Zhang, K., Zhu, Y., Leng, S., He, Y., Maharjan, S., and Zhang, Y. (2019a). Deep learning empowered task offloading for mobile edge computing in urban informatics. IEEE Internet Things J. 6, 7635–7647. doi: 10.1109/JIOT.2019.2903191

CrossRef Full Text | Google Scholar

Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., and Ma, K. (2019b). “Be your own teacher: Improve the performance of convolutional neural networks via self distillation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (Seoul), 3713–3722.

Google Scholar

Zhang, M. R., Lucas, J., Hinton, G., and Ba, J. (2019c). Lookahead optimizer: k steps forward, 1 step back.

Google Scholar

Zhang, S., Cao, J., Zhang, Q., Zhang, Q., Zhang, Y., and Wang, Y. (2020a). “An FPGA-based reconfigurable CNN accelerator for YOLO,” in 2020 IEEE 3rd International Conference on Electronics Technology (ICET) (Chengdu: IEEE), 74–78.

Google Scholar

Zhang, S., Du, Z., Zhang, L., Lan, H., Liu, S., Li, L., et al. (2016). “Cambricon-x: an accelerator for sparse neural networks,” in International Symposium on Microarchitecture (Taipei: IEEE Press), 20.

Google Scholar

Zhang, S., Zhao, Y., Nguyen, D. T., Xu, R., Sen, S., Hester, J., et al. (2020b). Necksense: a multi-sensor necklace for detecting eating activities in free-living conditions. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 4, 1. doi: 10.1145/3397313

PubMed Abstract | CrossRef Full Text | Google Scholar

Zhang, Y., Chen, G., Du, H., Yuan, X., Kadoch, M., and Cheriet, M. (2020c). Real-Time remote health monitoring system driven by 5G MEC-IoT. Electronics 9, 1753. doi: 10.3390/electronics9111753

CrossRef Full Text | Google Scholar

Zhang, Y., He, W., Wu, Y., Huang, K., Shen, Y., Su, J., et al. (2018b). Highly compact artificial memristive neuron with low energy consumption. Small 14, 1802188. doi: 10.1002/smll.201802188

PubMed Abstract | CrossRef Full Text | Google Scholar

Zhao, C., Ni, B., Zhang, J., Zhao, Q., Zhang, W., and Tian, Q. (2019a). “Variational convolutional neural network pruning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Long Beach, CA: IEEE), 2780–2789.

PubMed Abstract | Google Scholar

Zhao, N., Liang, Y.-C., Niyato, D., Pei, Y., Wu, M., and Jiang, Y. (2019b). Deep reinforcement learning for user association and resource allocation in heterogeneous cellular networks. IEEE Trans. Wireless Commun. 18, 5141–5152. doi: 10.1109/TWC.2019.2933417

PubMed Abstract | CrossRef Full Text | Google Scholar

Zhao, R., Hu, Y., Dotzel, J., De Sa, C., and Zhang, Z. (2019c). Improving neural network quantization without retraining using outlier channel splitting. Proc. Mach. Learn. Res.

Google Scholar

Zhou, A., Yao, A., Guo, Y., Xu, L., and Chen, Y. (2017). Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044.

Google Scholar

Zhou, A., Yao, A., Wang, K., and Chen, Y. (2018a). “Explicit loss-error-aware quantization for low-bit deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT), 9426–9435.

Google Scholar

Zhou, S., Wu, Y., Ni, Z., Zhou, X., Wen, H., and Zou, Y. (2016). Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.

Google Scholar

Zhou, Y., Moosavi-Dezfooli, S.-M., Cheung, N.-M., and Frossard, P. (2018b). “Adaptive quantization for deep neural network,” in Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32 (New Orleans, CA).

Google Scholar

Zhu, C., Han, S., Mao, H., and Dally, W. J. (2016). Trained ternary quantization. arXiv preprint arXiv:1612.01064.

Google Scholar

Zhu, G., Liu, D., Du, Y., You, C., Zhang, J., and Huang, K. (2020). Toward an intelligent edge: wireless communication meets machine learning. IEEE Commun. Mag. 58, 19–25. doi: 10.1109/MCOM.001.1900103

CrossRef Full Text | Google Scholar

Zhuang, J., Tang, T., Ding, Y., Tatikonda, S., Dvornek, N., Papademetris, X., et al. (2020). AdaBelief optimizer: adapting stepsizes by the belief in observed gradients. arXiv preprint arXiv:2010.07468.

Google Scholar

Ziatdinov, M., Kim, D., Neumayer, S., Vasudevan, R. K., Collins, L., Jesse, S., Ahmadi, M., et al. (2020). Imaging mechanism for hyperspectral scanning probe microscopy via Gaussian process modelling. npj Comput. Mater. 6, 21. doi: 10.1038/s41524-020-0289-6

CrossRef Full Text | Google Scholar

Zoph, B., and Le, Q. V. (2016). Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.

PubMed Abstract | Google Scholar

Keywords: machine learning for science, big data, particle physics, codesign, coprocessors, heterogeneous computing, fast machine learning

Citation: Deiana AM, Tran N, Agar J, Blott M, Di Guglielmo G, Duarte J, Harris P, Hauck S, Liu M, Neubauer MS, Ngadiuba J, Ogrenci-Memik S, Pierini M, Aarrestad T, Bähr S, Becker J, Berthold A-S, Bonventre RJ, Müller Bravo TE, Diefenthaler M, Dong Z, Fritzsche N, Gholami A, Govorkova E, Guo D, Hazelwood KJ, Herwig C, Khan B, Kim S, Klijnsma T, Liu Y, Lo KH, Nguyen T, Pezzullo G, Rasoulinezhad S, Rivera RA, Scholberg K, Selig J, Sen S, Strukov D, Tang W, Thais S, Unger KL, Vilalta R, von Krosigk B, Wang S and Warburton TK (2022) Applications and Techniques for Fast Machine Learning in Science. Front. Big Data 5:787421. doi: 10.3389/fdata.2022.787421

Received: 30 September 2021; Accepted: 31 January 2022;
Published: 12 April 2022.

Edited by: Elena Cuoco, European Gravitational Observatory, Italy

Reviewed by: Andreea Anghel, IBM Research-Zurich, Switzerland; Rudy Raymond, IBM, Japan

Copyright © 2022 Deiana, Tran, Agar, Blott, Di Guglielmo, Duarte, Harris, Hauck, Liu, Neubauer, Ngadiuba, Ogrenci-Memik, Pierini, Aarrestad, Bähr, Becker, Berthold, Bonventre, Müller Bravo, Diefenthaler, Dong, Fritzsche, Gholami, Govorkova, Guo, Hazelwood, Herwig, Khan, Kim, Klijnsma, Liu, Lo, Nguyen, Pezzullo, Rasoulinezhad, Rivera, Scholberg, Selig, Sen, Strukov, Tang, Thais, Unger, Vilalta, von Krosigk, Wang and Warburton. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Allison McCarn Deiana, adeiana@smu.edu; Nhan Tran, ntran@fnal.gov

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.