We hear more and more about artificial intelligence (AI) without yet fully understanding biological intelligence (BI). It seems to be taken for granted that trained statistical inference algorithms, such as those employed in machine learning, generative models, or large language models (LLMs), are “intelligent” because they provide reasonably useful data-mined answers to wide-ranging questions. But what is the essence of intelligence? And how should we compare AI and BI? The floating-point operation (FLOP) is a common measure in digital computing, but we do not yet have a good grasp of what a “BioFLOP” would be, and analyses based on spikes or synapses often ignore information-rich subthreshold processing. Biological brains and their bodies evolved in challenging environments, so the energy efficiency of computation may be a good starting point to anchor comparisons of AI versus BI. That brains are more energy efficient than computers is not disputed; estimates put the advantage at many orders of magnitude. This vast discrepancy may provide a key to understanding nervous systems, the very idea of AI, and energy-saving alternatives.
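As a rough illustration of why such estimates are hard to pin down, consider the following back-of-envelope sketch in Python. Every number in it is an order-of-magnitude assumption rather than a measurement: a brain budget of roughly 20 W with about 10^15 synaptic events per second, and an accelerator drawing roughly 700 W at about 10^15 FLOP/s.

# Back-of-envelope energy per elementary operation, brain versus accelerator.
# All numbers are order-of-magnitude assumptions, not measurements.
BRAIN_POWER_W = 20.0      # assumed whole-brain power budget
BRAIN_OPS_PER_S = 1e15    # assumed synaptic events/s (~10^14 synapses at ~10 Hz)
GPU_POWER_W = 700.0       # assumed board power of a modern accelerator
GPU_FLOPS = 1e15          # assumed sustained low-precision throughput

joules_per_op_brain = BRAIN_POWER_W / BRAIN_OPS_PER_S   # ~2e-14 J
joules_per_op_gpu = GPU_POWER_W / GPU_FLOPS             # ~7e-13 J
print(f"Brain: {joules_per_op_brain:.1e} J per synaptic event")
print(f"GPU:   {joules_per_op_gpu:.1e} J per FLOP")
print(f"Ratio: ~{joules_per_op_gpu / joules_per_op_brain:.0f}x in the brain's favor")

Note that under these particular assumptions the gap looks modest; the many-orders-of-magnitude estimates arise when a single biological “operation” is credited with far more computation than one FLOP, which is exactly the BioFLOP definition problem raised above.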
Can we better understand what brains are doing computationally through a cost analysis that reveals what kinds of information must be processed in biology or machines? Could this understanding in turn inform the design of more energy-efficient computing? And might such insights then lead to revelations about the evolution of intelligence itself?
The keys to understanding the biological side will no doubt involve a deep dive into multi-scale information processing in neural networks, from molecules and ultrastructure to circuits and regions, as well as in single cells, which are known to make their own decisions. Perhaps we can even integrate whole-organism biomechanics and metabolomics. Nature has provided us with many architectures to compare across species, from the highly complex humans and cetaceans, to small networks such as those of C. elegans or A. californica, to the truly odd cephalopods.
On the artificial side, delving into everything from microchip characteristics to machine learning strategies will be required. For example, can we achieve affordable AI with current microchip architectures? Can comparing multi-purpose versus task-specific techniques, or the training versus inference phases of machine learning, point to algorithmic gains in energy efficiency? Many creators of AI tools have recently lamented the energy costs and environmental impacts of usage, and this is likely to grow quickly into a significant problem as user numbers outpace design improvements.
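As one minimal, hypothetical accounting of the training-versus-inference question, the sketch below amortizes a one-time training cost over deployed query traffic. All parameters are invented placeholders, not figures from any real system.

# Amortizing an assumed one-time training cost over inference usage.
# Every parameter is a hypothetical placeholder, not a measured value.
TRAIN_ENERGY_KWH = 1.0e6   # assumed energy to train a large model once
INFER_ENERGY_WH = 3.0      # assumed energy per served query
QUERIES_PER_DAY = 1.0e8    # assumed global query volume
DAYS_DEPLOYED = 365        # assumed deployment lifetime

total_inference_kwh = INFER_ENERGY_WH * QUERIES_PER_DAY * DAYS_DEPLOYED / 1000.0
train_share = TRAIN_ENERGY_KWH / (TRAIN_ENERGY_KWH + total_inference_kwh)
print(f"Inference energy over deployment: {total_inference_kwh:.2e} kWh")
print(f"Training share of total energy:   {train_share:.1%}")

Under these placeholder numbers, cumulative inference energy dwarfs the training cost within a year, consistent with the concern that usage growth, rather than training, may come to dominate AI's energy footprint.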
We call for papers that address the energy demands and relative efficiency of artificial and biological intelligence. Submissions may include primary experimental or computational research, reviews, and theory papers on relevant topics, ranging from speculations on the BioFLOP to the comparability of natural and machine intelligence. We hope this compendium will not only motivate the development of more energy-efficient computing methods and architectures, but also catalyze discussions about what exactly we mean by intelligence, its parameters, properties, and underlying concepts. These papers should cover a range of species, biological scales, physical computing architectures, and machine learning strategies and algorithms.
Topic Editor Stephen Larson is the co-founder of and employed by MetaCell LLC, LTD. The other Topic Editors declare no competing interests with regard to the Research Topic subject.
Keywords:
Energy Efficiency, Comparative Intelligence, Machine Learning Efficiency, Multi-Scale Analysis, BioFLOP
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.