- 1Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y Sistemas, Universidad Nacional Autónoma de México, Mexico City, Mexico
- 2Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City, Mexico
- 3Lakeside Labs GmbH, Klagenfurt am Wörthersee, Austria
There is no agreed definition of intelligence, so it is problematic to simply ask whether brains, swarms, computers, or other systems are intelligent or not. To compare the potential intelligence exhibited by different cognitive systems, I follow the approach common in artificial intelligence and artificial life: instead of studying the substrate of systems, focus on their organization. This organization can be measured with information. Thus, I apply an informationist epistemology to describe cognitive systems, including brains and computers. This allows me to frame the usefulness and limitations of the brain-computer analogy in different contexts. I also use this perspective to discuss the evolution and ecology of intelligence.
1. Introduction
In the 1850s, an English newspaper described the growing global telegraph network as a “nervous system of the planet” (Gleick, 2011). Notice that this was half a century before Ramón y Cajal (1899) first published his studies on neurons. Still, metaphors have been used since antiquity to describe and try to understand our bodies and our minds (Zarkadakis, 2015; Epstein, 2016): humans have been described as made of clay (Middle East) or corn (Americas), with flowing humors, like clockwork automata, similar to industrial factories, etc. The most common metaphor in cognitive sciences has been that of describing brains as computers (von Neumann, 1958; Davis, 2021).
Metaphors have been used in a broad range of disciplines. For example, in urbanism, there are arguments in favor of changing the dominant narrative of “cities as machines” to “cities as organisms” (Batty, 2012; Gershenson, 2013b).
We could debate at length which metaphors are best. Still, being pragmatic, we can judge metaphors in terms of their usefulness: if they help us understand phenomena or build systems, then they are valuable. Note that, depending on the context, different metaphors can then be useful for different purposes (Gershenson, 2004). For example, in the 1980s, the debate between symbolists/representationists (brain as symbol processor) (Fodor and Pylyshyn, 1988) and connectionists (brain as a network of simple units) (Smolensky, 1988) did not end with a “winner” and a “loser,” as both metaphors (computational, by the way) are useful in different contexts.
There have been several other metaphors used to describe cognition, minds, and brains, each with their advantages and disadvantages (Varela et al., 1991; Steels and Brooks, 1995; Clark and Chalmers, 1998; Beer, 2000; Gärdenfors, 2000; Garnier et al., 2007; Chemero, 2009; Froese and Ziemke, 2009; Kiverstein and Clark, 2009; Froese and Stewart, 2010; Stewart et al., 2010; Downing, 2015; Harvey, 2019). It is not my purpose to discuss these here, but to notice that there is a rich variety of flavors when it comes to studying cognition. Nevertheless, all of these metaphors can be described in terms of information processing. Since computation can be understood as the transformation of information (Gershenson, 2012), “computers,” broadly understood as machines that process information can be a useful metaphor to contain and compare other metaphors. Note that the concept of “machine” (and thus computer) could also be updated (Bongard and Levin, 2021).
Formally, computation was defined by Turing (1937). A computable function is one that can be calculated by a Universal Turing Machine (UTM). Still, there are two main limitations of UTMs when it comes to modeling minds (Gershenson, 2011a):
1. UTMs are closed. Once a computation begins, there is no change in the program or data, so adaptation during computation is limited.
2. UTMs produce their output only once they halt. In other words, outputs depend on a UTM “finishing its computation.” Still, minds seem to be more continuous than halting. The question then arises: what function would a mind be computing?
As many have noted, the continuous nature of cognition seems to be closely related to that of the living (Maturana and Varela, 1980; Hopfield, 1994; Stewart, 1995; Walker, 2014). We have previously studied the “living as information processing” (Farnsworth et al., 2013), not only at the organism level, but at all relevant scales. Thus, it is natural to use a similar approach to describe intelligence.
Note that these limitations of UTMs apply only to theoretical computation. In practice, many artificial computing systems are continuous, such as reactive systems. An example is an operating system, which does not really halt, but is always waiting for events (internal or external) and responding to them.
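As a minimal sketch of this distinction (in Python; the function names and the “shutdown” event are illustrative, not taken from any particular system):

import queue

def halting_computation(n):
    # UTM-style: the output is only defined once the computation halts.
    return sum(range(n))

def reactive_system(events: queue.Queue):
    # Reactive sketch: it never halts by design; it keeps waiting for
    # internal or external events and responds to each one as it arrives.
    while True:
        event = events.get()      # blocks until an event arrives
        if event == "shutdown":   # stopped from outside, not by finishing a computation
            break
        print("responding to", event)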
In the next section, I present a general notion of information and its limits to study intelligence. Then, I present the advantages of studying intelligence in terms of information processing. Intelligence is not restricted to brains, and swarms are a classic example of this, which can also be described as information processing systems. Before concluding, I exploit the metaphor of “intelligence as information processing” to understand its evolution and ecology.
2. Information
Shannon (1948) proposed a measure of information in the context of telecommunications that is equivalent to Boltzmann-Gibbs entropy. This measure characterizes how much a receiver “learns” from incoming symbols (usually bits) of a string, given the probability distribution of previously known/received symbols: if new bits can be completely determined from the past (as in a string with only one repeating symbol), then they carry zero information (because we already know that the new symbols will be the same as previous ones). If previous information is useless for predicting the next bit (as in a random coin toss), then the bit carries maximum information. Elaborating on this, Shannon calculated how much redundancy is required to reliably transmit a message over an unreliable (noisy) channel. Even though Shannon's purpose was very specific, the use of information in various disciplines has exploded in recent decades (Haken, 1988; Lehn, 1990; Wheeler, 1990; Gell-Mann and Lloyd, 1996; Atlan and Cohen, 1998; DeCanio and Watkins, 1998; Roederer, 2005; von Baeyer, 2005; Cover and Thomas, 2006; Prokopenko et al., 2009, 2011; Batty et al., 2012; Escalona-Morán et al., 2012; Gershenson, 2012, 2020, 2021b; Fernández et al., 2014, 2017; Zubillaga et al., 2014; Haken and Portugali, 2015; Hidalgo, 2015; Murcio et al., 2015; Amoretti and Gershenson, 2016; Roli et al., 2018; Equihua et al., 2020; Krakauer et al., 2020; Scharf, 2021).
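As a minimal sketch of this measure (in Python; the helper name is ours), estimating the average information per symbol of a string from its own symbol frequencies:

from collections import Counter
from math import log2

def shannon_information(string):
    # Shannon's measure: H = sum over symbols of p * log2(1/p), in bits per symbol.
    counts = Counter(string)
    n = len(string)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(shannon_information("AAAAAAAA"))  # 0.0 bits: a repeating symbol carries no information
print(shannon_information("HTHHTTHT"))  # 1.0 bit: each fair coin toss carries maximum information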
We can say that electronic computers process information explicitly, as we can analyze each change of state, and information is encoded at a precise physical location. However, humans and other animals process information implicitly. For example, we say we have memories, but these are not stored at a specific physical location. And it seems unfeasible to represent precisely how information changes in our brains. Still, we do process information, as we can describe “inputs” (perceptions) and “outputs” (actions).
Shannon assumed that the meaning of a message had been agreed upon beforehand by emitter and receiver. This was not a major problem for telecommunications. However, in other contexts, meaning is not a trivial matter. Following Wittgenstein (1999), we can say that the meaning of information is given by the use agents make of it. This has several implications. One is that we can change meaning without changing information [passive information transformation (Gershenson, 2012)]. Another concerns the limits of artificial intelligence (Searle, 1980; Mitchell, 2019), as the use of information in artificial systems tends to be predefined. Algorithms can “recognize” traffic lights or cats in an image, as they are trained for this specific purpose. But the “meaning” for computer programs is predefined, i.e., what we want the program to do. The quest for an “artificial general intelligence” that would go beyond this limit has so far produced little more than speculation.
Even if we could simulate in a digital computer all the neurons, molecules, or even elementary particles of a brain, such a simulation would not yield something akin to a mind. On the one hand, interactions generate novel information at multiple scales, so we would need to include not only the brain, but also the body and the world that interact with the brain (Clark, 1997). Moreover, such a simulation would require modeling not only one scale, but all scales relevant to minds (see below). On the other hand, as mentioned above, observers can give different meanings to the same information. In other words, the same “brain state” in different people could refer to different “mental states.” For example, we could use the same simple “neural” architecture of a Braitenberg vehicle (Braitenberg, 1986) that exhibits phototaxis, but connect the inputs to different sensors (e.g., sound or odor, instead of light), and the “meaning” of the information processed by the same neural architecture would be very different. In a sense, this is related to the failure of Laplace's daemon: even with full information about the states of the components of a system, prediction is limited because interactions generate novel information (Gershenson, 2013a). And this novel information can determine the future production of information at different scales through upward or downward causation (Campbell, 1974; Bitbol, 2012; Farnsworth et al., 2017; Flack, 2017), so all relevant scales should be considered (Gershenson, 2021a). An example of downward causation can be given with money: it is a social contract, but it has a causal effect on matter and energy (physics), e.g., when we extract minerals from a mountain. This action does not violate the laws of physics, but the laws of physics are not enough to predict that the matter in the mountain will be extracted by humans for their own purposes.
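A minimal sketch of this point (in Python; a simplification of a Braitenberg vehicle, not the author's implementation): the wiring below is identical in both calls, and only the observer's description of what the sensors measure changes.

def vehicle_step(sensor_left, sensor_right):
    # Braitenberg-style vehicle with crossed excitatory connections: each
    # sensor drives the opposite motor, so the vehicle turns toward whatever
    # stimulus the sensors happen to measure.
    motor_left, motor_right = sensor_right, sensor_left
    return motor_left, motor_right

# The same architecture connected to different sensors: the information
# processing is identical, but an observer would call the behavior
# phototaxis, phonotaxis, or chemotaxis depending on the modality.
print(vehicle_step(sensor_left=0.2, sensor_right=0.9))  # light sensors: the vehicle turns toward the light
print(vehicle_step(sensor_left=0.2, sensor_right=0.9))  # sound sensors: the vehicle turns toward the sound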
In spite of all its limitations, the computer metaphor can be useful in particular ways. First, the limits that interactions place on prediction are related to computational irreducibility (Wolfram, 2002). Second, describing brains and minds in terms of information allows us to avoid dualisms; thus, it becomes natural to use information processing to describe intelligence and its evolution. Finally, information can contain other metaphors and formalisms, so it can be used to compare them and also to exploit their benefits.
3. Intelligence
There are several definitions of intelligence, but not a single one that is agreed upon. We have similar situations with the definitions of life (De Duve, 2003; Aguilar et al., 2014), consciousness (Michel et al., 2019), complexity (Lloyd, 2001; Heylighen et al., 2007), emergence (Bedau and Humphreys, 2008), and more. These concepts could be said to be of the type “I know it when I see it,” to quote Potter Stewart.
Still, having no agreed definition is neither a reason nor an excuse for not studying a phenomenon. Moreover, having different definitions of the same phenomenon can give us broader insights than sticking to a single, narrow, inflexible definition.
Thus, we could define intelligence as “the art of getting away with it” (Arturo Frappé), or “the ability to hold two opposed ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise” (F. Scott Fitzgerald). Turing (1950) proposed his famous test to decide whether a machine is intelligent. Generalizing Turing's test, Mario Lagunez suggested that, to decide whether a system is intelligent, first the system has to perform an action; then, an observer has to judge whether the action was intelligent or not, according to some criteria. In this sense, there is no intrinsically intelligent behavior. All actions and decisions are contextual (Gershenson, 2002). As with meaning, the same action can be intelligent or not, depending on the context and on the judge and their expectations.
Generalizing, we can define intelligence in terms of information processing: An agent a can be described as intelligent if it transforms information [individual (internal) or environmental (external)] to increase its “satisfaction” σ.
I have previously defined satisfaction σ ∈ [0, 1] as the degree to which the goals of an agent have been fulfilled (Gershenson, 2007, 2011b). Certainly, we still require an observer, since we are the ones who define the goals of an agent, its boundaries, its scale, and thus, its satisfaction. Examples of goals are sustainability, survival, happiness, power, control, and understanding. All of these can be described as information propagation (Gershenson, 2012): In this context, an intelligent agent will propagate its own information.
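One simple operationalization of this definition (in Python), assuming the observer can enumerate the agent's goals and weighs them equally; this is a sketch, not the formalism of the cited papers:

def satisfaction(goals_fulfilled, goals_total):
    # sigma in [0, 1]: the degree to which the agent's goals, as defined by
    # an observer, have been fulfilled.
    return goals_fulfilled / goals_total if goals_total else 1.0

def judged_intelligent(sigma_before, sigma_after):
    # An observer judges an information transformation as intelligent if it
    # increased the agent's satisfaction.
    return sigma_after > sigma_before

sigma_0 = satisfaction(goals_fulfilled=2, goals_total=5)  # 0.4
sigma_1 = satisfaction(goals_fulfilled=4, goals_total=5)  # 0.8
print(judged_intelligent(sigma_0, sigma_1))               # True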
Brains by themselves cannot propagate. But species of animals with brains tend to propagate. In this context, brains are parts of agents that help process information in order to propagate those agents. From this abstract perspective, we can see that such ability is not restricted to brains (Levin and Dennett, 2020). Thus, there are other mechanisms capable of producing intelligent behavior.
4. Swarms
There has been much work related to collective intelligence and cognition (Hutchins, 1995; Heylighen, 1999; Reznikova, 2007; Couzin, 2009; Malone and Bernstein, 2015; Solé et al., 2016). Interestingly, groups of humans, animals or machines do not have a single brain. Thus, information processing is distributed.
A particular case is that of insect swarms (Chialvo and Millonas, 1995; Garnier et al., 2007; Passino et al., 2008; Marshall et al., 2009; Trianni and Tuci, 2009; Martin and Reggia, 2010), where not only is information processing distributed, but reproduction and selection also occur at the colony level (Hölldobler and Wilson, 2008).
To compare the cognitive architectures of brains and swarms, I previously proposed computing networks (Gershenson, 2010). With this formalism, it can be shown that differences in substrate do not necessarily imply a theoretical difference in cognitive abilities. Nevertheless, in practice, the speed and scalability of information processing in brains are far superior to those of swarms: neurons can interact on the scale of milliseconds, and mammalian brains can have on the order of 10^11 neurons with 10^14 synapses (several species have more neurons than humans, including elephants and some whales, with orcas having the most, more than twice as many as humans). The largest insect swarms that have been recorded (locusts) are also on the order of 10^11 individuals (covering 200 km²). However, insects interact on the scale of seconds, and only with their local neighbors. In theory, this might not matter much. But in practice, it considerably limits the information-processing capacity of swarms compared to brains.
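A back-of-the-envelope comparison (in Python) using only the orders of magnitude quoted above; the number of local neighbors per insect (here ten) is an illustrative assumption, not a figure from this article:

brain_synapses = 1e14      # mammalian brain, order of magnitude
brain_rate = 1e3           # interactions per synapse per second (millisecond scale)
brain_events = brain_synapses * brain_rate

swarm_individuals = 1e11   # largest recorded locust swarms
neighbors_per_insect = 10  # assumed for illustration only
swarm_rate = 1             # interactions per neighbor per second (second scale)
swarm_events = swarm_individuals * neighbors_per_insect * swarm_rate

print(f"brain: ~{brain_events:.0e} interaction events per second")  # ~1e+17
print(f"swarm: ~{swarm_events:.0e} interaction events per second")  # ~1e+12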
Thus, the brain as computer metaphor is not appropriate for studying collective intelligence in general, nor swarm intelligence in particular. However, the intelligence of brains and swarms can be described in terms of information processing, as an agent a can be an organism or a colony, with its own satisfaction σ defined by an external observer.
Another advantage of studying intelligence as information processing is that we can use the same formalism to study intelligence at multiple scales: cellular, multicellular, collective/social, and cultural. Curiously, at the global scale (where we might reach on the order of 10^10 humans later this century), the brain metaphor has also been used (Mayer-Kress and Barczys, 1995; Börner et al., 2005; Bernstein et al., 2012), although its usefulness remains to be demonstrated.
5. Evolution and Ecology
If we want to have a better understanding of intelligence, we must study how it came to evolve. Intelligence as information-processing can also be useful in this context, as different substrates and mechanisms can be used to exhibit intelligent behavior.
What could be the ecological pressures that promote the evolution of intelligence? Since environments and ecosystems can also be described in terms of information, we can say that more complex environments will promote, through natural selection, more complex organisms and species, which will require a more complex intelligence to process the information of their environment and of the other organisms and species they interact with (Gershenson, 2012). In this way, the complexity of ecosystems can also be expected to increase through evolution. It should be noted that we understand complexity as a balance between order and chaos, stability and change (Packard, 1988; Langton, 1990; Lopez-Ruiz et al., 1995; Fernández et al., 2014; Roli et al., 2018). Thus, species can be neither too robust nor too adaptable if they are to thrive in a complex ecosystem. This will certainly depend on how stable or volatile the ecosystem is (Equihua et al., 2020), but it is clear that organisms need to match the variety that their environment poses (Ashby, 1956; Gershenson, 2015) (see below).
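A minimal sketch of such a balance measure (in Python), along the lines of the information-based measures in Fernández et al. (2014) and assuming a per-symbol estimate of entropy: emergence E is the Shannon entropy normalized to [0, 1], self-organization S = 1 − E, and complexity C = 4ES, which is maximal at the balance between order and chaos.

from collections import Counter
from math import log2

def complexity(string, alphabet_size=2):
    # E: normalized Shannon entropy; S = 1 - E; C = 4 * E * S.
    counts = Counter(string)
    n = len(string)
    entropy = sum((c / n) * log2(n / c) for c in counts.values())
    E = entropy / log2(alphabet_size)
    return 4 * E * (1 - E)

print(round(complexity("0000000000"), 2))  # 0.0: fully ordered (E = 0)
print(round(complexity("0110100101"), 2))  # 0.0: maximum entropy (E = 1)
print(round(complexity("0001000000"), 2))  # 1.0: balance between order and chaos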
These ideas generalize Dunbar's (1993, 2003) “social brain hypothesis”: larger and more complex social groups exert a selective pressure for more complex information processing (measured as the neocortex to body mass ratio), which gives individuals greater cognitive capacities to recognize different individuals, remember whom they can trust, handle multiple levels of intentionality (Dennett, 1989), and so on. In turn, increased cognitive abilities lead to more complex groups, so this cycle reinforces the selection for more intelligent individuals.
One can make a similar argument using environments instead of social groups: more complex ecosystems exert a selective pressure for more intelligent organisms, social groups, and species, as these require greater information-processing capabilities to survive in and exploit their environments. This also creates a feedback loop, in which more complex information processing by organisms, groups, and species produces more complex ecosystems.
However, individuals can “offload” their information processing onto their group or environment, leading to a decrease in their individual information-processing abilities (Reséndiz-Benhumea et al., 2021). That is to say, intelligence does not always increase. Although there is a selective pressure for intelligence, its cost imposes limits that also depend on the usefulness of increased cognitive abilities.
Generalizing, we can say that information evolves to have greater control over its own production (Gershenson, 2012). This leads to more complex information processing, and thus we can expect intelligence to increase at multiple scales through evolution, independently of the substrates that actually do the information processing.
Another way of describing the same process: information is transformed by different causes, which generates a variety of complexity (Ashby, 1956; Gershenson, 2015). More complex information requires more complex agents to propagate it, leading to an increase of complexity and intelligence through evolution.
At different scales, since the Big Bang, we have seen an increase of information processing through evolution. In recent decades, this increase has been supra-exponential in computers (Schaller, 1997). Although there are limitations to sustaining this rate of increase (Shalf, 2020), we can say that the increase of intelligence is a natural tendency of evolution, be it in brains, swarms, or machines. This will not lead to a “singularity,” but to an increase of the intelligence and complexity of humans, machines, and the ecosystems we create.
6. Conclusion
Brains are not essential for intelligence. Plants, swarms, bacterial colonies, robots, societies, and more exhibit intelligence without brains. An understanding of intelligence (and life; Gershenson et al., 2020) independently of its substrate, in terms of information processing, will be more illuminating than focusing only on the mechanisms used by vertebrates and other animals. In this sense, the metaphor of the brain as a computer is limited more on the side of the brain than on the side of the computer. Brains do process information to exhibit intelligence, but several other mechanisms also process information to exhibit intelligence. Brains are just a particular case, and we can learn a lot from them, but we will learn more if we do not limit our studies to their particular type of cognition.
Data Availability Statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
Author Contributions
CG conceived and wrote the paper.
Funding
This work was supported by UNAM's PAPIIT IN107919 and IV100120 grants.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Aguilar, W., Santamaría Bonfil, G., Froese, T., and Gershenson, C. (2014). The past, present, and future of artificial life. Front. Robot. AI 1:8. doi: 10.3389/frobt.2014.00008
Amoretti, M., and Gershenson, C. (2016). Measuring the complexity of adaptive peer-to-peer systems. Peer-to-Peer Netw. Appl. 9, 1031–1046. doi: 10.1007/s12083-015-0385-4
Ashby, W. R. (1956). An Introduction to Cybernetics. London: Chapman & Hall. doi: 10.5962/bhl.title.5851
Atlan, H., and Cohen, I. R. (1998). Immune information, self-organization and meaning. Int. Immunol. 10, 711–717. doi: 10.1093/intimm/10.6.711
Batty, M. (2012). Building a science of cities. Cities 29, S9–S16. doi: 10.1016/j.cities.2011.11.008
Batty, M., Morphet, R., Massuci, P., and Stanilov, K. (2012). “Entropy, complexity and spatial information,” in CASA Working Paper, 185. London, UK.
Bedau, M. A., and Humphreys, P. (eds.). (2008). Emergence: Contemporary Readings in Philosophy and Science. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/9780262026215.001.0001
Beer, R. D. (2000). Dynamical approaches to cognitive science. Trends Cogn. Sci. 4, 91–99. doi: 10.1016/S1364-6613(99)01440-0
Bernstein, A., Klein, M., and Malone, T. W. (2012). Programming the global brain. Commun. ACM 55, 41–43. doi: 10.1145/2160718.2160731
Bitbol, M. (2012). Downward causation without foundations. Synthese 185, 233–255. doi: 10.1007/s11229-010-9723-5
Bongard, J., and Levin, M. (2021). Living things are not (20th century) machines: updating mechanism metaphors in light of the modern science of machine behavior. Front. Ecol. Evol. 9:147. doi: 10.3389/fevo.2021.650726
Börner, K., Dall'Asta, L., Ke, W., and Vespignani, A. (2005). Studying the emerging global brain: analyzing and visualizing the impact of co-authorship teams. Complexity 10, 57–67. doi: 10.1002/cplx.20078
Campbell, D. T. (1974). “‘Downward causation’ in hierarchically organized biological systems,” in Studies in the Philosophy of Biology, eds F. J. Ayala and T. Dobzhansky (New York City, NY: Macmillan), 179–186. doi: 10.1007/978-1-349-01892-5_11
Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: The MIT Press. doi: 10.7551/mitpress/8367.001.0001
Chialvo, D., and Millonas, M. (1995). “How swarms build cognitive maps,” in The Biology and Technology of Intelligent Autonomous Agents, Vol. 144, ed L. Steels (Berlin; Heidelberg: Springer), 439–450. doi: 10.1007/978-3-642-79629-6_20
Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/1552.001.0001
Clark, A., and Chalmers, D. (1998). The extended mind. Analysis 58, 7–19. doi: 10.1093/analys/58.1.7
Couzin, I. D. (2009). Collective cognition in animal groups. Trends Cogn. Sci. 13, 36–43. doi: 10.1016/j.tics.2008.10.002
Cover, T. M., and Thomas, J. A. (2006). Elements of Information Theory. Hoboken, NJ: Wiley-Interscience.
Davis, M. (2021). The brain-as-computer metaphor. Front. Comput. Sci. 3:41. doi: 10.3389/fcomp.2021.681416
DeCanio, S. J., and Watkins, W. E. (1998). Information processing and organizational structure. J. Econ. Behav. Organ. 36, 275–294. doi: 10.1016/S0167-2681(98)00096-1
Dennett, D. C. (1989). The Intentional Stance. Cambridge, MA: MIT Press. doi: 10.1017/S0140525X00058611
Downing, K. L. (2015). Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/9898.001.0001
Dunbar, R. I. M. (1993). Coevolution of neocortical size, group size and language in humans. Behav. Brain Sci. 16, 681–735. doi: 10.1017/S0140525X00032325
Dunbar, R. I. M. (2003). The social brain: mind, language and society in evolutionary perspective. Ann. Rev. Anthrop. 32, 163–181. doi: 10.1146/annurev.anthro.32.061002.093158
Equihua, M., Espinosa Aldama, M., Gershenson, C., López-Corona, O., Munguía, M., Pérez-Maqueo, O., and Ramírez-Carrillo, E. (2020). Ecosystem antifragility: beyond integrity and resilience. PeerJ 8:e8533. doi: 10.7717/peerj.8533
Escalona-Morán, M., Paredes, G., and Cosenza, M. G. (2012). Complexity, information transfer and collective behavior in chaotic dynamical networks. Int. J. Appl. Math. Stat. 26, 58–66. Available online at: https://arxiv.org/abs/1010.4810
Farnsworth, K. D., Ellis, G. F. R., and Jaeger, L. (2017). “Living through downward causation: from molecules to ecosystems,” in From Matter to Life: Information and Causality, eds S. I. Walker, P. C. W. Davies, and G. F. R. Ellis (Cambridge, UK: Cambridge University Press), 303–333.
Farnsworth, K. D., Nelson, J., and Gershenson, C. (2013). Living is information processing: from molecules to global systems. Acta Biotheor. 61, 203–222. doi: 10.1007/s10441-013-9179-3
Fernández, N., Aguilar, J., Piña-García, C. A., and Gershenson, C. (2017). Complexity of lakes in a latitudinal gradient. Ecol. Complex. 31, 1–20. doi: 10.1016/j.ecocom.2017.02.002
Fernández, N., Maldonado, C., and Gershenson, C. (2014). “Information measures of complexity, emergence, self-organization, homeostasis, and autopoiesis,” in Guided Self-Organization: Inception, Vol. 9 of Emergence, Complexity and Computation, ed M. Prokopenko (Berlin; Heidelberg: Springer), 19–51. doi: 10.1007/978-3-642-53734-9_2
Flack, J. C. (2017). Coarse-graining as a downward causation mechanism. Philos. Trans. R. Soc. A 375:20160338. doi: 10.1098/rsta.2016.0338
Fodor, J. A., and Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: a critical analysis. Cognition 28, 3–71. doi: 10.1016/0010-0277(88)90031-5
Froese, T., and Stewart, J. (2010). Life after Ashby: ultrastability and the autopoietic foundations of biological autonomy. Cybern. Hum. Know. 17, 7–50. doi: 10.1007/s10699-010-9222-7
Froese, T., and Ziemke, T. (2009). Enactive artificial intelligence: investigating the systemic organization of life and mind. Artif. Intell. 173, 366–500. doi: 10.1016/j.artint.2008.12.001
Gärdenfors, P. (2000). Conceptual Spaces: The Geometry of Thought. Cambridge, MA: MIT Press; Bradford Books. doi: 10.7551/mitpress/2076.001.0001
Garnier, S., Gautrais, J., and Theraulaz, G. (2007). The biological principles of swarm intelligence. Swarm Intell. 1, 3–31. doi: 10.1007/s11721-007-0004-y
Gell-Mann, M., and Lloyd, S. (1996). Information measures, effective complexity, and total information. Complexity 2, 44–52. doi: 10.1002/(SICI)1099-0526(199609/10)2:1<44::AID-CPLX10>3.0.CO;2-X
Gershenson, C. (2002). Contextuality: A Philosophical Paradigm, With Applications to Philosophy of Cognitive Science. POCS Essay, COGS, University of Sussex.
Gershenson, C. (2004). Cognitive paradigms: which one is the best? Cogn. Syst. Res. 5, 135–156. doi: 10.1016/j.cogsys.2003.10.002
Gershenson, C. (2007). Design and Control of Self-organizing Systems. Mexico: CopIt Arxives. Available online at: http://tinyurl.com/DCSOS2007
Gershenson, C. (2010). Computing networks: a general framework to contrast neural and swarm cognitions. Paladyn J. Behav. Robot. 1, 147–153. doi: 10.2478/s13230-010-0015-z
Gershenson, C. (2011a). Are Minds Computable? Technical Report 2011.08, Centro de Ciencias de la Complejidad. https://arxiv.org/abs/1110.3002
Gershenson, C. (2011b). The sigma profile: a formal tool to study organization and its evolution at multiple scales. Complexity 16, 37–44. doi: 10.1002/cplx.20350
Gershenson, C. (2012). “The world as evolving information,” in Unifying Themes in Complex Systems, Vol. VII, eds A. Minai, D. Braha, and Y. Bar-Yam (Berlin; Heidelberg: Springer), 100–115. doi: 10.1007/978-3-642-18003-3_10
Gershenson, C. (2013a). The implications of interactions for science and philosophy. Found. Sci. 18, 781–790. doi: 10.1007/s10699-012-9305-8
Gershenson, C. (2015). Requisite variety, autopoiesis, and self-organization. Kybernetes 44, 866–873. doi: 10.1108/K-01-2015-0001
Gershenson, C. (2020). “Information in science and Buddhist philosophy: towards a non-materialistic worldview,” in Vajrayana Buddhism in Russia: Topical Issues of History and Sociocultural Analytics, eds A. M. Alekseyev-Apraksin and V. M. Dronova (Moscow: Almazny Put), 210–218.
Gershenson, C. (2021b). “On the scales of selves: information, life, and Buddhist philosophy,” in ALIFE 2021: The 2021 Conference on Artificial Life, eds J. Čejková, S. Holler, L. Soros, and O. Witkowski (Prague: MIT Press), 2. doi: 10.1162/isal_a_00402
Gershenson, C., Trianni, V., Werfel, J., and Sayama, H. (2020). Self-organization and artificial life. Artif. Life 26, 391–408. doi: 10.1162/artl_a_00324
Haken, H. (1988). Information and Self-organization: A Macroscopic Approach to Complex Systems. Berlin: Springer-Verlag. doi: 10.1007/978-3-662-07893-8
Haken, H., and Portugali, J. (2015). Information Adaptation: The Interplay Between Shannon Information and Semantic Information in Cognition, Volume XII of SpringerBriefs in Complexity. Cham; Heidelberg; New York, NY; Dordrecht; London: Springer. doi: 10.1007/978-3-319-11170-4
Harvey, I. (2019). Neurath's boat and the Sally-Anne test: life, cognition, matter and stuff. Adapt. Behav. 1059712319856882. doi: 10.1177/1059712319856882
Heylighen, F. (1999). Collective intelligence and its implementation on the web. Comput. Math. Theory Organ. 5, 253–280. doi: 10.1023/A:1009690407292
Heylighen, F., Cilliers, P., and Gershenson, C. (2007). “Complexity and philosophy,” in Complexity, Science and Society, eds J. Bogg and R. Geyer (Oxford: Radcliffe Publishing), 117–134.
Hidalgo, C. A. (2015). Why Information Grows: The Evolution of Order, From Atoms to Economies. New York, NY: Basic Books.
Hölldobler, B., and Wilson, E. O. (2008). The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York, NY: W. W. Norton & Company.
Hopfield, J. J. (1994). Physics, computation, and why biology looks so different. J. Theor. Biol. 171, 53–60. doi: 10.1006/jtbi.1994.1211
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/1881.001.0001
Kiverstein, J., and Clark, A. (2009). Introduction: mind embodied, embedded, enacted: one church or many? Topoi 28, 1–7. doi: 10.1007/s11245-008-9041-4
Krakauer, D., Bertschinger, N., Olbrich, E., Flack, J. C., and Ay, N. (2020). The information theory of individuality. Theory Biosci. 139, 209–223. doi: 10.1007/s12064-020-00313-7
Langton, C. G. (1990). Computation at the edge of chaos: phase transitions and emergent computation. Phys. D 42, 12–37. doi: 10.1016/0167-2789(90)90064-V
Lehn, J.-M. (1990). Perspectives in supramolecular chemistry–from molecular recognition towards molecular information processing and self-organization. Angew. Chem. Int. Edn. Engl. 29, 1304–1319. doi: 10.1002/anie.199013041
Lloyd, S. (2001). Measures of Complexity: A Non-Exhaustive List. Department of Mechanical Engineering, Massachusetts Institute of Technology.
Lopez-Ruiz, R., Mancini, H. L., and Calbet, X. (1995). A statistical measure of complexity. Phys. Lett. A 209, 321–326. doi: 10.1016/0375-9601(95)00867-5
Malone, T. W., and Bernstein, M. S., editors (2015). Handbook of Collective Intelligence. Cambridge, MA: MIT Press.
Marshall, J. A., Bogacz, R., Dornhaus, A., Planqué, R., Kovacs, T., and Franks, N. R. (2009). On optimal decision-making in brains and social insect colonies. J. R. Soc. Interface. 6:1065–1074. doi: 10.1098/rsif.2008.0511
Martin, C., and Reggia, J. (2010). Self-assembly of neural networks viewed as swarm intelligence. Swarm Intell. 4, 1–36. doi: 10.1007/s11721-009-0035-7
Maturana, H., and Varela, F. (1980). Autopoiesis and Cognition: The Realization of Living. Dordrecht: Reidel Publishing Company. doi: 10.1007/978-94-009-8947-4
Mayer-Kress, G., and Barczys, C. (1995). The global brain as an emergent structure from the worldwide computing network, and its implications for modeling. Inform. Soc. 11, 1–27. doi: 10.1080/01972243.1995.9960177
Michel, M., Beck, D., Block, N., Blumenfeld, H., Brown, R., Carmel, D., et al. (2019). Opportunities and challenges for a maturing science of consciousness. Nat. Hum. Behav. 3, 104–107. doi: 10.1038/s41562-019-0531-8
Murcio, R., Morphet, R., Gershenson, C., and Batty, M. (2015). Urban transfer entropy across scales. PLoS ONE 10:e0133780. doi: 10.1371/journal.pone.0133780
Packard, N. H. (1988). “Adaptation toward the edge of chaos,” in Dynamic Patterns in Complex Systems, eds J. A. S. Kelso, A. J. Mandell, and M. F. Shlesinger (Singapore: World Scientific), 293–301.
Passino, K. M., Seeley, T. D., and Visscher, P. K. (2008). Swarm cognition in honey bees. Behav. Ecol. Sociobiol. 62, 401–414. doi: 10.1007/s00265-007-0468-1
Prokopenko, M., Boschetti, F., and Ryan, A. J. (2009). An information-theoretic primer on complexity, self-organisation and emergence. Complexity 15, 11–28. doi: 10.1002/cplx.20249
Prokopenko, M., Lizier, J. T., Obst, O., and Wang, X. R. (2011). Relating fisher information to order parameters. Phys. Rev. E 84:041116. doi: 10.1103/PhysRevE.84.041116
Ramón y Cajal, S. (1899). Textura del Sistema Nervioso del Hombre y de los Vertebrados: Estudios Sobre el Plan Estructural y Composición Histológica de los Centros Nerviosos Adicionados de Consideraciones Fisiológicas Fundadas en los Nuevos Descubrimientos, Vol. 1. Madrid: Moya.
Reséndiz-Benhumea, G. M., Sangati, E., Sangati, F., Keshmiri, S., and Froese, T. (2021). Shrunken social brains? A minimal model of the role of social interaction in neural complexity. Front. Neurorobot. 15:72. doi: 10.3389/fnbot.2021.634085
Reznikova, Z. (2007). Animal Intelligence From Individual to Social Cognition. Cambridge, UK: Cambridge University Press.
Roederer, J. G. (2005). Information and its Role in Nature. Heidelberg: Springer-Verlag. doi: 10.1007/3-540-27698-X
Roli, A., Villani, M., Filisetti, A., and Serra, R. (2018). Dynamical criticality: overview and open questions. J. Syst. Sci. Complex. 31, 647–663. doi: 10.1007/s11424-017-6117-5
Schaller, R. (1997). Moore's law: past, present and future. IEEE Spectr. 34, 52–59. doi: 10.1109/6.591665
Scharf, C. (2021). The Ascent of Information: Books, Bits, Genes, Machines, and Life's Unending Algorithm. New York, NY: Riverhead Books.
Searle, J. R. (1980). Minds, brains, and programs. Behav. Brain Sci. 3, 417–424. doi: 10.1017/S0140525X00005756
Shalf, J. (2020). The future of computing beyond Moore's law. Philos. Trans. R. Soc. A Math 378:20190061. doi: 10.1098/rsta.2019.0061
Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Techn. J. 27, 379–423; 623–656. doi: 10.1002/j.1538-7305.1948.tb00917.x
Smolensky, P. (1988). On the proper treatment of connectionism. Behav. Brain Sci. 11, 1–23. doi: 10.1017/S0140525X00052432
Solé, R., Amor, D. R., Duran-Nebreda, S., Conde-Pueyo, N., Carbonell-Ballestero, M., and Montañez, R. (2016). Synthetic collective intelligence. Biosystems 148, 47–61. doi: 10.1016/j.biosystems.2016.01.002
Steels, L., and Brooks, R. (1995). The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents. New York City, NY: Lawrence Erlbaum Associates.
Stewart, J. (1995). Cognition = life : Implications for higher-level cognition. Behav. Process. 35, 311–326. doi: 10.1016/0376-6357(95)00046-1
Stewart, J., Gapenne, O., and Di Paolo, E. A. (eds.). (2010). Enaction: Toward a New Paradigm for Cognitive Science. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/9780262014601.001.0001
Trianni, V., and Tuci, E. (2009). “Swarm cognition and artificial life,” in Advances in Artificial Life. Proceedings of the 10th European Conference on Artificial Life (ECAL 2009). Hungary.
Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. s2–42, 230–265. doi: 10.1112/plms/s2-42.1.230
Turing, A. M. (1950). Computing machinery and intelligence. Mind 59, 433–460. doi: 10.1093/mind/LIX.236.433
Varela, F. J., Thompson, E., and Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/6730.001.0001
von Baeyer, H. C. (2005). Information: The New Language of Science. Cambridge, MA: Harvard University Press.
Walker, S. I. (2014). Top-down causation and the rise of information in the emergence of life. Information 5, 424–439. doi: 10.3390/info5030424
Wheeler, J. A. (1990). “Chapter 19: Information, physics, quantum: the search for links,” in Complexity, Entropy, and the Physics of Information, volume VIII of Santa Fe Institute Studies in the Sciences of Complexity, ed W. H. Zurek (Reading, MA: Perseus Books), 309–336.
Wittgenstein, L. (1999). Philosophical Investigations, 3rd Edn. Upper Saddle River, NJ: Prentice Hall.
Zarkadakis, G. (2015). In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence. Pegasus Books.
Keywords: mind, cognition, intelligence, information, brain, computer, swarm
Citation: Gershenson C (2021) Intelligence as Information Processing: Brains, Swarms, and Computers. Front. Ecol. Evol. 9:755981. doi: 10.3389/fevo.2021.755981
Received: 09 August 2021; Accepted: 22 September 2021;
Published: 18 October 2021.
Edited by:
Giorgio Matassi, FRE3498 Ecologie et Dynamique des Systèmes Anthropisés (EDYSAN), France
Reviewed by:
Thilo Gross, Helmholtz Institute for Functional Marine Biodiversity (HIFMB), Germany
Alberto Policriti, University of Udine, Italy
Copyright © 2021 Gershenson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Carlos Gershenson, cgg@unam.mx