
REVIEW article

Front. Psychol., 17 August 2023
Sec. Cognitive Science
This article is part of the Research Topic Insights in Cognitive Science.

Intelligence across humans and machines: a joint perspective

Tiago Buatim Nion Da Silveira1,2* and Heitor Silvério Lopes1
  • 1Computational Intelligence Laboratory (LABIC), Federal University of Technology – Paraná, Curitiba, Brazil
  • 2Polytechnic School, University of Vale do Itajaí, Itajaí, Brazil

This paper aims to address the divergences and contradictions in the definition of intelligence across different areas of knowledge, particularly in computational intelligence and psychology, where the concept is of significant interest. Despite the differences in motivation and approach, both fields have contributed to the rise of cognitive science. However, the lack of a standardized definition, empirical evidence, or measurement strategy for intelligence is a hindrance to cross-fertilization between these areas, particularly for semantic-based applications. This paper seeks to equalize the definitions of intelligence from the perspectives of computational intelligence and psychology, and offers an overview of the methods used to measure intelligence. We argue that there is no consensus on the definition of intelligence, and the term is used interchangeably in many fields with similar, opposed, or even contradictory definitions. This paper concludes with a summary of its central considerations and contributions, where we state that intelligence is an agent's ability to process external and internal information to find an optimal adaptation (decision-making) to the environment according to its ontology, and then decode this information as an output action.

1. Introduction

On the one hand, divergences and contradictions in the definition of concepts between areas of knowledge can be ignored or minimized when such areas have little or no relationship (e.g., structure in civil engineering carries a different meaning from structure in psychology). On the other hand, we must be rather careful when a given concept is used in areas with distinct epistemic bases but closely related applications. This is the case with intelligence. From the etymological point of view, the term intelligence originates from the Latin language and designates the ability to understand. Over the years, the term has been used in the most diverse areas, from military strategy to business. Today, its lay use, from a cognitive perspective, is most associated with logic, good grades, and problem-solving abilities (Cohen et al., 2009).

From the beginning of the twenty-first century, the popularization of neural networks, machine learning, and deep learning applications made the term intelligence strongly associated with computational intelligence (CI), leveraged by increasingly human-like capabilities—especially those related to semantics, such as computer vision (CV) and natural language processing (NLP). Recently, the excitement caused by generative models such as Dall-E and ChatGPT, for image and text generation, respectively, has contributed to reinforcing the idea of a possible human-like general intelligence with comparable semantic comprehension abilities.

Regarding psychology and computational intelligence, we should observe that the motivations of their studies on intelligence differ: psychology is marked by an interest in individual differences, which began in the 19th century with Francis Galton, while computational intelligence started with artificial intelligence in the mid-20th century, with the expectation of emulating human intelligence, focusing on its commonality. Despite these differences, both disciplines have contributed significantly to the rise and enhancement of cognitive science (Anderson, 2009; Russel and Norvig, 2009).

Before discussing the validity of a “general” intelligence, it is of fundamental importance to first discuss the generalization of the concept of intelligence, since many of its definitions lack formalization, empirical evidence, or a measurement strategy. With this research, and assuming that intelligence is a quality or ability of a given agent or system (whether human, biological, or artificial), we seek to equalize such definitions of intelligence from the perspectives of computational intelligence and psychology, thus contributing to the cross-fertilization between these areas, especially for semantic-based applications.

This paper is organized as follows: the main theories and definitions of intelligence are compiled in Section 2, while Section 3 offers an overview of the methods used to measure intelligence in both computing and psychology, including the evaluation of intelligence in the relationship between humans and machines. In Section 4, we claim that there is no consensus on intelligence, meaning that the term is used interchangeably in many fields with similar, opposed, or even contradictory definitions. Section 5 finalizes this paper with its central considerations and contributions.

2. Multiple definitions of intelligence

Intelligence has historically been associated with formalism (Flynn, 2007), resulting from a positivist perspective that gave rise to, among other things, computing itself. Computer science is formalist by definition (Sipser, 2012), although disciplines such as computational intelligence are mostly based on information theory (Shannon, 1948). Psychology, on the other hand, started from a solid positivist influence and began considering different epistemic bases in its studies after the emergence of psychodynamics (Solms and Turnbull, 2010). In this section, we review the main concepts and definitions of intelligence found in computational intelligence (Section 2.1) and psychology (Section 2.2) that support or oppose each other.

2.1. Definitions from computational intelligence

As defined by Russel and Norvig (2009), an intelligent agent is an entity that perceives and acts in a given environment through sensors and actuators, respectively. In this definition, the agent function, particular to each agent, determines the action to be taken by the agent in response to any perceived stimulus and is implemented through a program—the latter running on a hardware architecture containing sensors and actuators. The authors point out countless ways to implement the same agent function so that the programs present differences in efficiency, compactness, and flexibility. Therefore, the appropriate program design that will implement the agent function will depend on the nature of the environment in which this agent will be inserted. In the authors' view, this process defines artificial intelligence (AI), which they establish as the science of designing agents.

The same authors suggest that intelligent agents can be categorized into (i) simple reflex agents, when they respond directly to perceptions; (ii) model-based reflex agents, when the response to perceptions occurs through internal states that are not evident; (iii) goal-based agents, whose actions are directed to achieve a previously-defined objective; and, finally, (iv) utility-based agents, who try to maximize their expected utility. In any case, the agent reads the environment state through sensors and acts on the environment through actuators, as shown in Figure 1. In this conception, agents improve themselves through learning, usually based on penalties and rewards. Interestingly, the categorization of intelligent agents into reflex, goal-based, and utility-based types echoes the concept of behaviorism in psychology, where behavior is understood as a response to stimuli in the environment, directed toward a goal or outcome, and shaped by rewards and punishments.


Figure 1. Diagram of a model-based, utility-based agent. An intelligent agent perceives its environment through sensors and acts in this environment through actuators in a way to maximize its utility. Adapted from Russel and Norvig (2009).
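The agent taxonomy above can be sketched in code. The following is a minimal, illustrative implementation of a simple reflex agent, assuming a hypothetical thermostat-like environment; the percept format and the rules are our own invention, not taken from Russel and Norvig (2009).

```python
# Minimal sketch of a simple reflex agent: the percept is mapped directly to
# an action through condition-action rules. The thermostat-like environment
# and the rules are hypothetical, for illustration only.

def simple_reflex_agent(percept, rules):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "noop"  # no rule fired

rules = [
    (lambda p: p["temperature"] < 18, "heat"),
    (lambda p: p["temperature"] > 24, "cool"),
]

print(simple_reflex_agent({"temperature": 15}, rules))  # -> heat
print(simple_reflex_agent({"temperature": 21}, rules))  # -> noop
```

Model-based, goal-based, and utility-based agents would extend this loop with internal state, objectives, and utility estimates, respectively.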

From Russel and Norvig's conception, a human would be defined as a utility-based agent: from an internal model of the world (i.e., the environment), the agent would choose the action that maximizes the expected utility, computed as the average over all possible output states, weighted by the probability of each output. The authors consider happiness as the agent function, which means a human agent would operate to become happier. It should be noted that the attribution of happiness to the expected utility corroborates ideas from behaviorism (Skinner, 1965), but corresponds neither to the subjective conceptions of happiness or pleasure found in Freud (1920) and Siegel (2010) nor to the concept of expected utility under uncertainty addressed by Gilboa (2010).
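This expected-utility computation can be made concrete with a small, hypothetical example: each action has possible outcome states with probabilities and utilities, and the agent selects the action with the highest probability-weighted average utility. The actions and numbers below are purely illustrative.

```python
# Hypothetical utility-based agent step: pick the action maximizing the
# expected utility, i.e., the probability-weighted average utility of its
# possible outcome states. Actions, probabilities, and utilities are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "stay": [(1.0, 5.0)],                # certain outcome, modest utility
    "move": [(0.7, 10.0), (0.3, -4.0)],  # risky, higher expected payoff
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> move, since 0.7*10 + 0.3*(-4) = 5.8 > 5.0
```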

Bezdek (1992) proposed a distinction between artificial, biological, and computational intelligence—what he called the “ABCs of neural networks (NN), pattern recognition (PR), and intelligence (I)”—whose relationship is represented through these acronyms in Figure 2. In this categorization, CI is a subset of AI which, in turn, is a subset of biological intelligence in terms of complexity. The same relationship is established for NN: a subset of PR and, again in terms of complexity, a subset of intelligence. In a more recent publication on the subject, Bezdek (2016) recognizes that, although this distinction remains valid, the borders between AI and CI are increasingly blurred.


Figure 2. Relationship between biological (B), artificial (A), and computational (C) organizations of neural networks (NN), pattern recognition (PR), and intelligence (I). Adapted from Bezdek (2016).

For instance, deep neural networks, which are a cornerstone of artificial intelligence, are inspired by the structure and function of the human brain, blurring the line between artificial and biological intelligence. Similarly, computational intelligence techniques such as fuzzy logic and genetic algorithms have found applications in a wide range of fields, from finance to medicine, demonstrating the versatility and power of these approaches. Therefore, while the distinction between artificial, biological, and computational intelligence remains a useful conceptual framework, it is important to recognize that the boundaries between these domains are constantly evolving and shifting.

Although without explicitly defining intelligence, Shneiderman (2020) proposes an alternative categorization for artificial intelligence based on goals. The author distinguishes AI research into two major groups: (i) those seeking to emulate human intelligence—for which the author includes “aspiration for humanoid robots, natural language and image understanding, commonsense reasoning, and artificial general intelligence”—and (ii) those seeking to build valuable applications, i.e., “AI-guided products and services” such as “instruments, apps, orthotics, prosthetics, utensils, or implements.”

Pandl et al. (2020) bring an implicit definition of intelligence by stating that “AI enables computers to execute tasks that are easy for people to perform but difficult to describe formally.” To this end, the authors segment artificial intelligence into (i) artificial general intelligence (AGI), an open-domain approach defined by Goertzel and Monroe (2017) as the human-like design of self-organizing and complex adaptive systems; and (ii) narrow artificial intelligence, a domain-specific approach in which the authors include knowledge bases and machine learning methods.

More recently, a conceptualization of intelligence based on its measurement was suggested by Chollet (2019, p. 27), who stated that “intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.” Such a definition is quite similar to the instrumentalist attempt from psychology to define intelligence, as we will describe in the next section.

2.2. Definitions from psychology

Psychology approaches intelligence as a construct—a psychometric concept designed to explain or understand a psychological phenomenon (i.e., the latent trait) from its indirect manifestations (i.e., the overt behavior). Cohen et al. (2009) bring the concept of interactionism to explain those theories that attribute intelligence to the interaction between people, consisting of biological and social factors, and these with the environment. In addition, Cohen et al. group factor-analytical theories as those concerned with identifying which factors or set of skills express intelligence. Finally, they call information processing theories those that study the mental processes that result in intelligence.

Historically, intelligence assessment has played an important and controversial role in psychology. Some authors (Simonton, 2003; Cohen et al., 2009) attribute the beginning of the scientific study of individual differences to Francis Galton, who established mathematical techniques such as correlation to measure them—a method that would later be improved by Pearson and Spearman.

In 1904, Spearman (1961, p. 260) observed that the “correlations calculated between the measurements of different abilities (scores for tests, marks for school subjects, or estimates made on general impression)” may follow a specific mathematical relationship he called “tetrad equations”. Such equations led him to formulate what is known today as factor analysis: a statistical technique that aims to uncover the underlying factors that explain the interrelationships among a set of observed variables. The tetrad equations reflect that the observed variables (e.g., test scores, questionnaire responses) can be modeled as a combination of common and unique factors. Spearman thus formulated the “two-factor theory” of intelligence, which included the specific factor (s), based on the unique ones, and the general factor of intelligence, or g factor, which would be common to any intelligent ability and thus able to measure it.

Arthur Jensen delved into the g factor theory, thus defining intelligence operationally as “the first principal component of an indefinitely large number of highly diverse mental tasks” (Jensen, 1978, p. 112).
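Jensen's operational definition can be illustrated numerically: given a battery of positively correlated scores, the first principal component of their correlation matrix plays the role of g. The sketch below uses synthetic scores generated from a latent ability, not real psychometric data.

```python
import numpy as np

# Synthetic illustration of g as the first principal component: three test
# scores share a latent general ability, so the leading eigenvector of their
# correlation matrix recovers a common factor. Not real psychometric data.

rng = np.random.default_rng(0)
g = rng.normal(size=500)  # latent general ability
tests = np.column_stack([
    0.8 * g + 0.6 * rng.normal(size=500),  # each test = loading on g + noise
    0.7 * g + 0.7 * rng.normal(size=500),
    0.6 * g + 0.8 * rng.normal(size=500),
])

corr = np.corrcoef(tests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]              # first principal component
share = eigvals[-1] / eigvals.sum()      # variance explained by the g-like factor
```

With positively correlated tests, the loadings all share the same sign and the first component explains well over a third of the total variance, mirroring the empirical pattern Spearman observed.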

In 1905, Alfred Binet and Theodore Simon published a measuring scale of intelligence applied to Parisian children—the first to use a standardized intelligence quotient (IQ) scale. Instead of defining intelligence, Binet limited himself to describing its components: “reasoning, judgment, memory, and abstraction” (Cohen et al., 2009, p. 292). For him, intelligent behavior was a joint result of these abilities, countering Galton's idea that each could be assessed separately.

The indiscriminate and uncritical use of psychological tests in the early twentieth century led Boring (1961) to state, in 1923, that “measurable intelligence is simply what the tests of intelligence test,” thus calling for a better definition. Flynn (2007) has called this perspective instrumentalism, defined by him as the attempt “to measure by referring to the readings of the measuring instrument,” which is thus “subject to devastating critique.”

In 1939, David Wechsler designed an intelligence test that could be applied to adults, defining intelligence as “the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958). His work gave rise to the gold-standard tests for intelligence known as the Wechsler Adult Intelligence Scale (WAIS) and the Wechsler Intelligence Scale for Children (WISC). Wechsler's definition stands out since it does not restrict intelligence to cognitive and executive functions but also considers the role of conative functions, i.e., those related to affection (Cohen et al., 2009, p. 52).

In the 1940s, Cattell (1941) began a line of work, later developed by Horn (1965), that led to a model defining two abilities: crystallized intelligence (Gc), which includes acquired vocabulary and knowledge, and fluid intelligence (Gf), which is non-verbal and has few cultural influences, such as numerical memory (Carroll, 2003). Other factors were later added by Horn: visual processing (Gv), auditory processing (Ga), quantitative processing (Gq), speed of processing (Gs), facility with reading and writing (Grw), short-term memory (Gsm), and long-term storage and retrieval (Glr) (Cohen et al., 2009, p. 297).

In the 1950s, Jean Piaget elaborated his theory of intelligence development in children (Piaget, 2003). For him, learning occurs through the processes of assimilation and accommodation. The first consists of absorbing new data to fit it into already-known information; the second consists of changing the registered information to fit the new data. Piaget's theory states that both processes occur along four development stages: (i) the sensorimotor stage, until 2 years, with goal-directed and intentional behavior, coordination and integration of the five senses, and recognition of the world and its objects as permanent entities; (ii) the preoperational stage, from 2 to 6 years, with concept understanding largely based on vision, contextual comprehension typically based on a single or obvious aspect of the stimulus, irreversible thought (focus on static states of reality, without making relations between them), and animistic thinking (attributing human qualities to non-human objects); (iii) the concrete operations stage, from 7 to 12 years, with conservation of thought (the world's attributes remain stable), solving of part-whole problems and serial ordering tasks, reasoning based on direct experience, problem-solving through more than one aspect, and differentiation between present and past; and (iv) the formal operations stage, over 12 years, with reasoning also based on indirect experience, formulation and testing of hypotheses (systematic thinking), complex reasoning through several variables (systematic perception), evaluation of one's own thought, and deductive reasoning.

Piaget's theory of cognitive development has been influential in shaping the understanding of human intelligence and its development. In the context of computational intelligence, these concepts are relevant as they provide insights into the development of intelligent systems. Piaget's theory emphasizes the importance of assimilation and accommodation processes in learning, which can be linked to machine learning algorithms that use previously learned information to make predictions or decisions based on new data. In the early stages of development, AI systems may rely on simple rule-based approaches, similar to Piaget's sensorimotor stage. As the system develops, it can progress to more complex approaches that involve reasoning and problem-solving, similar to Piaget's concrete and formal operational stages. It is worth noting that, according to Piaget's model, only in the last stage, the formal operations period, would a child be able to abstract and fully express formal and semantic abilities. Such skills are not yet fully performed by computational intelligence due to the lack of representation structures for such tasks. In this way, representation learning is a current research topic in CI (Goodfellow et al., 2016) and interfaces with findings from psychology such as those in Palmer (1999) and Sterling and Laughlin (2015).

The information processing paradigm, described by Palmer (1999, p. 70) as “a way of theorizing about the nature of the human mind as a computational process”, started to be employed in psychological theories from the middle of the twentieth century. Alexander Luria, considered the precursor of neuropsychology (Cohen et al., 2009, p. 300), was the first to conceptualize intelligence from this approach. Luria demonstrated two modes of information processing: (i) simultaneous or parallel and (ii) successive or sequential. The first would be associated with semantic attribution tasks, while the second would involve tasks that demand attention and linear execution, such as spelling a word. Such a paradigm provides a framework for understanding how intelligent systems process information. Simultaneous or parallel processing, associated with semantic attribution tasks, can be linked to approaches such as deep learning in computational intelligence, where multiple layers of neurons process information simultaneously to extract features at different abstraction levels. Successive or sequential processing, associated with tasks that demand attention and linear execution, can be linked to approaches such as reinforcement learning, where an agent learns by interacting with an environment in a sequential manner.

Gottfredson (1997) defined intelligence as “a highly general information processing capacity that facilitates reasoning, problem-solving, decision-making, and other higher order thinking skills.” Also situated in the information processing paradigm, Sternberg (2003) proposed the triarchic theory of intelligence, composed of analytical, creative, and practical aspects. Later, Sternberg (2019) proposed the theory of adaptive intelligence, questioning the general factor of intelligence, its metrics, and theoretical assumptions. His thesis is related to the environmental impact caused by humanity and questions how intelligent such actions are. Thus, he promotes a debate between such conceptions of intelligence from a biological and optimal perspective.

The consideration of a general intelligence factor, the g factor, started to be used again after Carroll's work, which divided intelligence into three strata of abilities: specific, broad, and general (Carroll, 2003). Flynn (2007, p. 48) argued against the g factor, stating it does not provide a robust definition of intelligence but limits it to a comparison approach. He also claimed that the social and physiological aspects of intelligence are reduced to whether or not they enhance the significance of g. As an alternative, Flynn proposes a theory that integrates physiological, individual, and social aspects of intelligence, which he calls the BIDS model—an acronym indicating the brain; the individual differences, which he associates with the g factor; and the society.

Recently, McGrew (2009) unified the theories of Cattell-Horn and Carroll in what he called the CHC-Theory. The Cattell-Horn model considered a total of eight abilities distributed through crystallized (Gc) and fluid (Gf) factors of intelligence. In turn, Carroll's model proposed that intelligence should be divided into three hierarchical strata: the first, of specific skills; the second, of complex factors such as Gf and Gc; and the third, the general intelligence factor, or g.

The CHC-model integrates them in the way schematically represented in Figure 3. The broad abilities (stratum II) are composed of narrow abilities (stratum I) as follows:
  • fluid intelligence (Gf): induction (I), general sequential reasoning (RG), and quantitative reasoning (RQ);
  • crystallized intelligence (Gc): general (verbal) information (KO), language development (LD), lexical knowledge (VL), listening ability (LS), communication ability (CM), grammatical sensitivity (MY), and oral production and fluency (OP);
  • general (domain-specific) knowledge (Gkn): foreign language proficiency (KL), knowledge of signing (KF), skill in lip-reading (LP), geography achievement (AS), general science information (K1), mechanical knowledge (MK), and knowledge of behavioral content (BC);
  • quantitative knowledge (Gq): mathematical knowledge (KM) and mathematical achievement (A3);
  • reading/writing ability (Grw): reading decoding (RD), reading comprehension (RC), reading speed (RS), spelling ability (SG), English usage knowledge (EU), writing ability (WA), and writing speed (WS);
  • short-term memory (Gsm): memory span (MS) and working memory (MW);
  • long-term storage and retrieval (Glr): associative memory (MA), meaningful memory (MM), free-recall memory (M6), naming facility (NA), associational fluency (FA), expressional fluency (FE), sensitivity to problems/alternative solution fluency (SP), originality/creativity (FO), ideational fluency (FI), word fluency (FW), and figural fluency (FF);
  • visual processing (Gv): visualization (Vz), speeded rotation (spatial relations) (SR), closure speed (CS), flexibility of closure (CF), visual memory (MV), spatial scanning (SS), serial perceptual integration (PI), length estimation (LE), perceptual illusions (IL), perceptual alternations (PN), and imagery (IM);
  • auditory processing (Ga): phonetic coding (PC), speech sound discrimination (US), resistance to auditory stimulus distortion (UR), memory for sound patterns (UM), maintaining and judging rhythm (U8), absolute pitch (UP), musical discrimination and judgment (U1 U9), and sound localization (UL);
  • olfactory processing (Go): olfactory memory (OM);
  • tactile abilities (Gh): no narrow ability defined;
  • psychomotor abilities (Gp): static strength (P3), multilimb coordination (P6), finger dexterity (P2), manual dexterity (P1), arm-hand steadiness (P7), control precision (P8), aiming (A1), and gross body equilibrium (P4);
  • kinesthetic abilities (Gk): no narrow ability defined;
  • processing speed (Gs): perceptual speed (P), rate-of-test-taking (R9), number facility (N), reading speed (fluency) (RS), and writing speed (fluency) (WS);
  • decision speed/reaction time (Gt): simple reaction time (R1), choice reaction time (R2), semantic processing speed (R4), mental comparison (R7), and inspection time (IT);
  • psychomotor speed (Gps): speed of limb movement (R3), writing speed (fluency) (WS), speed of articulation (PT), and movement time (MT).


Figure 3. CHC theory of intelligence. Stratum I represents the narrow abilities, as defined and detailed in Schneider and McGrew (2012). Stratum II represents the broad abilities, grouped by Flanagan and Dixon (2014) in reasoning (Gf), acquired knowledge (Gc, Gkn, Gq, Grw), memory and efficiency (Gsm, Glr), sensory (Gv, Ga, Go, Gh), motor (Gp, Gk), and speed and efficiency (Gs, Gt, Gps). Stratum III represents the factor g of general intelligence. Adapted from Flanagan and Dixon (2014).

In fact, the CHC-model adds the g factor to Cattell-Horn's theory and seeks empirical evidence for its validation. For each broad ability (stratum II) in the CHC-model, a set of narrow abilities (stratum I) is considered. As observed by Flanagan and Dixon (2014), the CHC theory is “a dynamic model that is continuously reorganized and restructured based on current research”, a statement reaffirmed by the last update on the CHC model where some factors were removed or rearranged (Schneider and McGrew, 2012).

Lastly, we must consider that human intelligence allows us to handle both accurate and vague information by computing with numbers and words. This is consistent with the Bayesian brain theory (Friston, 2010; Pouget et al., 2013), one of the most accredited theories in neuroscience, which sustains that “the brain both represents probability distributions and performs probabilistic inference” based on sensory input and prior models. Human decision-making based on incomplete information is also well-modeled by fuzzy logic (Zadeh, 2000). Recently, a close link between fuzzy logic and Bayesian inference has been established (Gentili, 2021), shedding light on a possible unified perspective for intelligent behavior.
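The probabilistic inference attributed to the Bayesian brain can be sketched as a single update step: a prior belief over hypotheses is combined with the likelihood of a sensory observation to yield a posterior. The hypotheses and numbers below are invented for illustration.

```python
# One Bayesian update step, as a sketch of the probabilistic inference the
# Bayesian brain theory attributes to perception. Hypotheses, prior, and
# likelihood values are illustrative only.

def bayes_update(prior, likelihood):
    """prior, likelihood: dicts hypothesis -> probability; returns the posterior."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())  # normalizing constant
    return {h: p / z for h, p in unnorm.items()}

prior = {"object_near": 0.5, "object_far": 0.5}
likelihood = {"object_near": 0.9, "object_far": 0.2}  # P(blurry input | hypothesis)

posterior = bayes_update(prior, likelihood)
print(round(posterior["object_near"], 3))  # -> 0.818
```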

3. Measuring intelligence

As claimed by Cohen et al. (2009, p. 303), “how one measures intelligence depends in large part on what one conceives intelligence to be.” That said, in this section, we will describe the main intelligence measures supported by the theories and definitions given in Section 2.

3.1. Intelligence measurement in computational intelligence

Russel and Norvig (2009) bring an economic perspective to the field. To them, a performance measure is necessary to evaluate a given agent's behavior in a given environment. Therefore, a rational agent would act to maximize its expected value as a performance measure, that is, maximize its expected utility. According to Gilboa (2010), such a proposition is only reasonable when we understand the utility's meaning and know the probability of such an event, which usually does not occur when dealing with applications under uncertainty.

Inspired by the g factor of intelligence and its measure, the intelligence quotient (IQ), some researchers proposed an equivalent metric for artificial intelligence: the machine intelligence quotient (MIQ). Park et al. (2001) proposed a framework for MIQ measurement in human-machine cooperative systems. These authors affirm that “machine intelligence is the ability to replicate the human mental faculty and to perform human-like.” Following this, they define MIQ as “the measure of autonomy and performance for unanticipated events.” In fact, from their definitions, the two uses of intelligence relate to different constructs or phenomena.

Another approach is offered by Ozkul (2009), who defines MIQ as the difference between the control intelligence quotient (CIQ) and the human intelligence quotient (HIQ). In his work, HIQ is posed as “the intelligence quantity needed from the human controller for controlling the system,” while CIQ is defined as “the total intelligence required for carrying out all the tasks in the man-machine cooperative system.” It is important to note that an independent definition of intelligence was not provided. Liu and Shi (2014), in turn, compare the performance of the Internet with that of the human brain network and propose a shallow equivalence with the human IQ, without accounting for the assumptions of validity and precision necessary for psychometrics.

Regarding machine learning and deep learning, the measurement of intelligence is replaced by some kind of performance evaluation. As pointed out by Goodfellow et al. (2016), the performance measure is usually task-specific and it is obtained through well-defined measurements such as accuracy, precision, and recall.
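As a concrete example of such task-specific measures, the snippet below computes accuracy, precision, and recall from a binary confusion matrix; the label vectors are made up for illustration.

```python
# Accuracy, precision, and recall computed from a binary confusion matrix.
# The label vectors are invented for illustration.

def confusion_counts(y_true, y_pred):
    """Return (tp, tn, fp, fn) for binary labels in {0, 1}."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 1]

tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)  # (3 + 2) / 8 = 0.625
precision = tp / (tp + fp)          # 3 / 5 = 0.6
recall = tp / (tp + fn)             # 3 / 4 = 0.75
```

Unlike the IQ-style metrics discussed above, these measures say nothing about intelligence in general: they only grade performance on one fixed task.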

3.2. Intelligence measurement in psychology

If, on the one hand, a definition of intelligence is essential for computation in order to develop better systems and agents, on the other hand, for psychology, its primary justification comes from the need to assess both children and adults, to identify their abilities and limitations due to traumas, illness, special skills, and any other diagnostic requirement.

Most of the theories for intelligence discussed in Section 2.2 sustain an intelligence measuring instrument. It is not the scope of this work to detail each of them. For illustrative purposes, we will follow the discussion based on Wechsler's subtests. In this instrument, the g factor is measured as the full-scale IQ (FSIQ), which is calculated from the verbal IQ (VIQ) and performance IQ (PIQ). VIQ is obtained by two factors (latent variables): the verbal comprehension index (VCI), whose observable variables are the results of the tests of vocabulary, similarities, information, and comprehension; and the working memory index (WMI), whose observable variables are the results for arithmetic, digit-span, and letter-number sequencing. PIQ is also obtained by two factors: the perceptual reasoning index (PRI), calculated over the results for picture completion, block design, matrix reasoning, visual puzzles, and figure weights; and the processing speed index (PSI), calculated over the results for coding, symbol search, and cancellation.
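The hierarchical composition described above can be represented schematically as a nested mapping from latent indices to their observable subtests. Real Wechsler scoring relies on normed conversion tables, which are omitted here; this is only a structural sketch, with subtest names written as hypothetical identifiers.

```python
# Structural sketch of the Wechsler hierarchy described above: subtest scores
# feed four index scores (VCI, WMI, PRI, PSI), which compose VIQ and PIQ and,
# ultimately, the full-scale IQ (FSIQ). Norm tables are deliberately omitted.

WAIS_STRUCTURE = {
    "VIQ": {
        "VCI": ["vocabulary", "similarities", "information", "comprehension"],
        "WMI": ["arithmetic", "digit_span", "letter_number_sequencing"],
    },
    "PIQ": {
        "PRI": ["picture_completion", "block_design", "matrix_reasoning",
                "visual_puzzles", "figure_weights"],
        "PSI": ["coding", "symbol_search", "cancellation"],
    },
}

def subtests_for(index_name):
    """Return the observable subtests behind a latent index (e.g., 'VCI')."""
    for indices in WAIS_STRUCTURE.values():
        if index_name in indices:
            return indices[index_name]
    raise KeyError(index_name)

print(subtests_for("WMI"))
```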

Nowadays, the validity of Wechsler's tests is supported by the CHC theory (Weiss et al., 2006; Grégoire, 2013; Scheiber, 2016). The following relationship is suggested by Grégoire (2013): the verbal comprehension index (VCI) relates to crystallized intelligence (Gc); the working memory index (WMI) relates to short-term memory (Gsm); the perceptual reasoning index (PRI) relates to fluid intelligence (Gf) and visual processing (Gv); and the processing speed index (PSI) relates to processing speed (Gs). Scheiber (2016) suggests a slightly different relationship with the CHC model for the arithmetic subtest of the Wechsler Intelligence Scale for Children - 5th Edition (WISC-V), which would measure not only short-term memory (Gsm) but also fluid intelligence (Gf) and crystallized intelligence (Gc).

Incidentally, the study of psychic abnormalities and their correlation with neural anatomy—i.e., the anatomo-clinical method—has led to great findings, from Charcot1 to modern computational neuroscience (Goetz, 2009). An example of the aforementioned correlation between adjacent neural processes and intelligence tests is found in the following quote:

“The Picture Vocabulary subtest was developed to further reduce the nonspecific language demands of the Vocabulary test. (..) The child must have the capacity to interpret pictures into their semantic representation. (..) For the vocabulary subtests, there are a number of hypotheses to test when the examinee receives a low score (..): are low scores due to impaired auditory or semantic decoding, semantic productivity, access to semantic knowledge and/or executive functioning, or limited auditory working memory capacity?” (Weiss et al., 2006, p. 217).

The above description indicates how semantic processes relate to the sensory inputs of vision and hearing and to executive, memory, and language functions. Therefore, the relevance of such diagnostic hypotheses goes beyond clinical assessment: it can inspire and direct research on the issues presented in Section 1, such as the enhancement of computational models to explain semantic attribution and to enable semantic emulation in CI applications.

Finally, it is worth mentioning another point of convergence between the areas we are working with: computer-assisted psychological assessment (CAPA), which includes not only test administration but also scoring, interpretation, and development. Such an approach can be exemplified by the recent works of Luo et al. (2020) and Martins and Baumard (2022). As pointed out by Cohen et al. (2009, p. 78), "CAPA has become more the norm than the exception." However, the authors identify some issues with this strategy that still need to be addressed: information security concerns; comparability between a pencil-and-paper test and its computerized version; the validity of test interpretations by computational intelligence methods; and the unprofessional use of online psychological tests.

3.3. Intelligence measurement as the interface between humans and artificial systems

Two theoretical tests played an important role in AI development and in speculation about its limits: the Turing test and Searle's Chinese room. Initially published as "the imitation game," the Turing test proposes that a human observer interrogates a computer and another human without knowing their identities. If the computer deceives the human observer, it could be claimed that it thinks, or at least expresses human-like intelligent behavior (Turing, 1950).

Later, in the 1980s, when criticizing functionalism (a theory of mind that assumes that two isomorphic systems, given the same sensory inputs, would have identical mental states), John Searle defended the idea of biological naturalism, in which mental states are high-level abstractions that emerge from low-level instances physically supported on neurons. For this, he proposed the Chinese room experiment, in which a human knowing only English and equipped with an instruction book performs operations written in Chinese. To an outside observer, the English-speaking human appears to understand Chinese and would pass the Turing test; however, the mere manipulation of symbols guarantees no understanding (Russell and Norvig, 2009, p. 1031).

From a practical standpoint, when evaluating the interface between humans and artificial systems, rough comparisons are often used. For example, in formal reasoning tasks like arithmetic, human performance is compared to computational performance. In semantic-related applications, the algorithm's performance is typically measured against a dataset that one or more humans have annotated. These practical comparisons contrast with the philosophical debates about whether artificial intelligence can fully replace or emulate human intelligence, which often center on the Turing test and Searle's Chinese room experiment.
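
The dataset-based evaluation described above reduces, in its simplest form, to scoring predictions against human annotations. A minimal sketch (the labels and predictions are invented for illustration):

```python
def accuracy(predictions, gold):
    """Fraction of predictions matching the human-annotated labels."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Invented example: sentiment labels from a human annotator vs. a model.
human_annotations = ["pos", "neg", "pos", "pos", "neg"]
model_predictions = ["pos", "neg", "neg", "pos", "neg"]
score = accuracy(model_predictions, human_annotations)  # 4 of 5 correct
```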

It seems fitting to include in this section the answer given by an artificial intelligence agent itself, ChatGPT2,3 (OpenAI, 2023), when asked about the differences between human and artificial intelligence:

(...) “Human intelligence is the product of the complex interactions between the brain's neurons and the environment, resulting in the ability to learn, reason, and solve problems. On the other hand, AI refers to the ability of machines or computer systems to perform tasks that typically require human-like intelligence, such as language processing, visual perception, decision-making, and problem-solving. (...) Finally, human intelligence is influenced by factors such as emotion, motivation, and context, whereas AI is typically designed to operate in a context-free environment and lacks emotional awareness or subjective experience.”

The above answer appears satisfactory and could easily be attributed to a student. In other words, it passes the Turing test but, as Searle advocated, it is nothing more than complex and effective symbol manipulation. From this perspective, the lack of a standard definition of intelligence between areas with distinct epistemic bases, as introduced in Section 1, sustains the problem posed by Searle. Computing seeks to generalize intelligence through its codification in algorithms. In formal reasoning (e.g., arithmetic), the result must indeed be unanimous and general. However, evaluating the semantic performance of an algorithm trained on a dataset annotated by humans implies assuming that such annotations would be similar throughout humanity. That may hold for some concepts, but numerous studies expose the subjective and particular character of any meaning attribution (Lakoff, 1990).
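
The assumption questioned here, namely that annotations would be similar across humans, can itself be tested empirically with chance-corrected agreement statistics such as Cohen's kappa. A minimal two-annotator sketch (the label sequences are invented):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators: (po - pe) / (1 - pe)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[l] / n) * (cb[l] / n)          # agreement expected by chance
             for l in set(a) | set(b))
    return (po - pe) / (1 - pe)

annotator_1 = [1, 1, 0, 1, 0, 0, 1, 0]
annotator_2 = [1, 0, 0, 1, 0, 1, 1, 0]
kappa = cohens_kappa(annotator_1, annotator_2)  # 0.5: only moderate agreement
```

Kappa values well below 1 on ostensibly simple labeling tasks are common in practice, which illustrates the subjective character of meaning attribution discussed above.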

4. There is no consensus for intelligence

We presented and discussed the main theories of intelligence, summarized in Table 1 for computational intelligence and in Table 2 for psychology. We propose a categorization of each theory's scope as general (G), open-domain (OD), or specific-domain (SD). We also categorized the modeling approach according to the epistemic foundations underlying the definition or theorization of intelligence: formal-logical (FL), instrumentalist (INS), information processing (IP), interactionist (INT), factor-analytical (FA), or skill-based (SB). For each author, we further identified the measurement approach: psychometric (PSY), observational (OBS), or performance metrics (PM).


Table 1. Intelligence definitions from computational intelligence and their respective authors.


Table 2. Intelligence definitions from psychology and their respective authors.

As a matter of fact, Tables 1, 2 reveal the lack of a standard, general definition of intelligence comprehensive to both computational intelligence (mostly objective and formal) and psychology (mostly subjective). Neisser repeatedly emphasized that a widely accepted definition of intelligence remains a challenge (Neisser, 1979; Neisser et al., 1996). In this paper, we reinforced that idea by showing the many conceptualizations of, and divergences around, this subject.

Flynn (2007, p. 50) observed that Jensen stopped using the term intelligence due to its lack of precision and consensus, referring instead to the construct measured by g as mental abilities. The same absence is pointed out by Rindermann et al. (2020) in a survey using the Internet-based Expert Questionnaire on Cognitive Ability (EQCA), which collected participants' differing opinions on the g factor, intelligence measurement, and related controversial issues. Thus, the models that define, and therefore explain, intelligence still need further elaboration. Other issues must also be taken into account, such as cross-cultural variation and human-machine interfaces. Flynn (2007, p. 54) notes that "different societies have different values and attitudes that determine what cognitive problems are worth the investment of mental energy." In light of this, one may wonder whether it is reasonable to think of a general intelligence.

Garlick (2002), after reviewing the main theories and models regarding intelligence, proposes a conceptual integration of the models from neural and cognitive sciences with the psychometric-based theory of general intelligence:

“These approaches initially seem contradictory because neuroscience and cognitive science argue that different intellectual abilities would be based on different neural circuits and that the brain would require environmental stimulation to develop these abilities. In contrast, intelligence research argues that there is a general factor of intelligence and that it is highly heritable. However, it was observed that if people differed in their ability to adapt their neural circuits to the environment, a general factor of intelligence would result. Such a model can also explain many other phenomena observed with intelligence that are currently unexplained” (Garlick, 2002, p. 131).

As the author emphasizes, further research is needed to provide an intelligence model that satisfactorily explains its manifestations. In neuropsychology, the CHC model has been considered the most robust and efficient one; we highlight here the role of psychological assessment and psychometrics in validating the theory a test is based upon, since they provide empirical evidence. However, despite the broad and narrow abilities proposed by the CHC model, depicted in Figure 3, many of them remain unexplored by intelligence measurement instruments in psychology and are even more rarely discussed in computational intelligence applications.

The naivety of the concept of intelligence in computing has been deeply debated in recent works (Korteling et al., 2021; Zhao et al., 2022). Its relationship with human intelligence can be observed, for instance, in the following statement:

"In the early days of artificial intelligence, the field rapidly tackled and solved problems that are intellectually difficult for human beings but relatively straightforward for computers—problems that can be described by a list of formal, mathematical rules. The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally—problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images" (Goodfellow et al., 2016, p. 1).

Since many AI and CI applications relate to cognitive and neuropsychic functions such as vision, language, and decision-making, computing must seek for the term intelligence a robustness equivalent to that found in psychology, so that a consensus on the definition and measurement of intelligence can be reached among all its related areas.

5. Conclusions and future research

In this paper, we explored what intelligence is in both computational intelligence and psychology. Our first contribution was to show that, although these areas have a strong relationship, a common definition of intelligence is still needed. Likewise, we explored how intelligence is measured in these areas through instruments supported by the respective theories. Secondly, we showed that there is no consensus on intelligence even within the same area. This fact highlights the need for a common concept to guide human and computational intelligence, since divergent concepts of intelligence can bias measurements and mislead applications.

Based on this, we propose that intelligence is an agent's ability to process external and internal information to find an optimum adaptation (decision-making) to the environment according to its ontology and then decode this information as an output action. This definition is compatible with the CHC model. However, applying the CHC model in computational intelligence can be arduous, given that these agents have considerably different ontological and epistemological bases. Therefore, we propose a categorization of intelligence in the following aspects: (i) formal intelligence, based on reasoning and logical representation; (ii) semantic intelligence, characterized by meaning attribution for both vague and accurate information; (iii) contextual intelligence, an optimization scheme that takes into account an agent's ontology and the state of the environment; (iv) social or affective intelligence, which involves interaction between agents and is dependent on their respective ontologies; and (v) processing resources, the biological or digital substrates that enable any intelligent expression.
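
The proposed definition can be rendered as a toy decision loop: the agent combines external and internal information and selects, under its own ontology, the action of highest utility, which it then emits as output. All names below are illustrative and not part of the paper's formalism.

```python
def act(external_input, internal_state, ontology):
    """Pick the optimum adaptation (action) under the agent's ontology."""
    utility = ontology["utility"]
    # Decision-making: optimum adaptation given the combined external and
    # internal information, then decoded as an output action.
    return max(ontology["actions"],
               key=lambda a: utility(external_input, internal_state, a))

# Toy ontology: a thermostat whose utility penalizes the distance from a
# target temperature after each action's (assumed) effect on the room.
thermostat_ontology = {
    "actions": ["heat", "cool", "idle"],
    "utility": lambda temp, target, a: -abs(
        target - (temp + {"heat": 2, "cool": -2, "idle": 0}[a])),
}

action = act(18, 22, thermostat_ontology)  # cold room, warm target: "heat"
```

The point of the sketch is the shape of the definition, not the thermostat: the same interface covers any agent whose ontology supplies an action set and a preference over outcomes.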

It is important to note that the definition we propose is the result of a theoretical synthesis of the studies presented in this paper, while also considering the prevailing scientific paradigms and their inherent conflicts, such as objectivism vs. subjectivism, determinism vs. indeterminism, and formal logic vs. information processing. We believe this categorization provides a comprehensive framework for further research on intelligence by promoting a collaborative perspective across disciplines. In future work, we aim to establish an empirical foundation for the proposed definition and encourage other researchers to explore this avenue as well.

Our proposal raises several research questions, including whether a unified concept of intelligence can abstract away each agent's ontology (i.e., biological vs. digital constitution) and how measurements under such a single theory could be unified. Additionally, we need to evaluate and differentiate the narrow abilities of computational and human agents. Notably, the CHC model is a robust theory of intelligence, but many of its broad and narrow abilities are still underexplored in psychology and rarely mentioned in computational intelligence applications. These remaining questions must be addressed in future research.

Author contributions

TD formulated the research manuscript idea, provided substantial edits to the manuscript and final draft, and aided in the interpretation of the manuscript. HL provided substantial edits to the manuscript and the final draft and aided in the interpretation of the manuscript. All authors approved the submitted version.

Acknowledgments

HL would like to thank CNPq for the research Grant 311785/2019-0. TD and HL thank the reviewers for their valuable contributions to the paper.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1209761/full#supplementary-material

Footnotes

1. ^Jean-Martin Charcot (1825–1893) was a French neurologist who is considered a pioneer in the field of clinical neurology. He is known for his contributions to the understanding and treatment of several neurological disorders, including multiple sclerosis and Parkinson's disease. He was also a teacher and mentor to many influential figures in the field of medicine, including Sigmund Freud.

2. ^ChatGPT is "a large-scale, multimodal model which can accept image and text inputs and produce text outputs" (OpenAI, 2023), developed and trained under a deep learning perspective.

3. ^The complete transcript of the interaction with the ChatGPT-3.5 prompt is available as Supplementary material for this paper.

References

Anderson, J. R. (2009). Cognitive Psychology and Its Implications, 7th Edn. New York, NY: Worth Publishers.

Bezdek, J. C. (1992). On the relationship between neural networks, pattern recognition and intelligence. Int. J. Approx. Reason. 6, 85–107.

Bezdek, J. C. (2016). (Computational) intelligence: what's in a name? IEEE Syst. Man Cybernet. Mag. 2, 4–14. doi: 10.1109/MSMC.2016.2558778

Binet, A., and Simon, T. (1904). Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux. L'année Psychologique 11, 191–244. doi: 10.3406/psy.1904.3675

Boring, E. G. (1961). "Intelligence as the tests test it," in Studies in Individual Differences: The Search for Intelligence (East Norwalk, CT: Appleton-Century-Crofts), 210–214.

Carroll, J. B. (2003). "The higher-stratum structure of cognitive abilities: current evidence supports g and about ten broad factors," in The Scientific Study of General Intelligence: Tribute to Arthur Jensen (Elsevier), 5–21.

Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychol. Bull. 38, 592.

Chollet, F. (2019). On the measure of intelligence. arXiv preprint arXiv:1911.01547. doi: 10.48550/arXiv.1911.01547

Cohen, R. J., Swerdlik, M. E., and Sturman, E. D. (2009). Psychological Testing and Assessment: An Introduction to Tests and Measurement, 7th Edn. McGraw-Hill.

Flanagan, D. P., and Dixon, S. G. (2014). "The Cattell-Horn-Carroll theory of cognitive abilities," in Encyclopedia of Special Education (Hoboken, NJ: John Wiley & Sons, Inc.). doi: 10.1002/9781118660584.ese0431

Flynn, J. R. (2007). What Is Intelligence? Beyond the Flynn Effect. Cambridge University Press.

Freud, S. (1920). "Beyond the pleasure principle," in The Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. XVIII (London: Hogarth Press).

Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138. doi: 10.1038/nrn2787

Garlick, D. (2002). Understanding the nature of the general factor of intelligence: the role of individual differences in neural plasticity as an explanatory mechanism. Psychol. Rev. 109, 116–136. doi: 10.1037/0033-295X.109.1.116

Gentili, P. L. (2021). Establishing a new link between fuzzy logic, neuroscience, and quantum mechanics through Bayesian probability: perspectives in artificial intelligence and unconventional computing. Molecules 26, 5987. doi: 10.3390/molecules26195987

Gilboa, I. (2010). Theory of Decision Under Uncertainty. New York, NY: Cambridge University Press.

Goertzel, B., and Monroe, E. (2017). "Toward a general model of human-like general intelligence," in AAAI Fall Symposium - Technical Report, 344–347.

Goetz, C. G. (2009). "Jean-Martin Charcot and the anatomo-clinical method of neurology," in History of Neurology, Vol. 95, eds M. J. Aminoff, F. Boller, and D. F. Swaab (Elsevier), 203–212.

Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning, Vol. 1. Cambridge: MIT Press.

Gottfredson, L. S. (1997). Why g matters: the complexity of everyday life. Intelligence 24, 79–132.

Grégoire, J. (2013). Measuring components of intelligence: mission impossible? J. Psychoeduc. Assess. 31, 138–147. doi: 10.1177/0734282913478034

Horn, J. L. (1965). Fluid and Crystallized Intelligence: A Factor Analytic Study of the Structure Among Primary Mental Abilities. University of Illinois at Urbana-Champaign, Champaign, IL.

Howard, G. (1983). Frames of Mind: The Theory of Multiple Intelligences, 1st Edn. New York, NY: Basic Books.

Jensen, A. R. (1978). The nature of intelligence and its relation to learning. Melbourne Stud. Educ. 20, 107–133.

Korteling, J. E., van de Boer-Visschedijk, G. C., Blankendaal, R. A., Boonekamp, R. C., and Eikelboom, A. R. (2021). Human- versus artificial intelligence. Front. Artif. Intell. 4, 622364. doi: 10.3389/frai.2021.622364

Lakoff, G. (1990). Women, Fire, and Dangerous Things. University of Chicago Press.

Liu, F., and Shi, Y. (2014). The search engine IQ test based on the internet IQ evaluation algorithm. Proc. Comput. Sci. 31, 1066–1073. doi: 10.1016/j.procs.2014.05.361

Luo, H., Cai, Y., and Tu, D. (2020). Procedures to develop a computerized adaptive testing to advance the measurement of narcissistic personality. Front. Psychol. 11, 1437. doi: 10.3389/fpsyg.2020.01437

Luria, A. R. (1973). The Working Brain: An Introduction to Neuropsychology. Basic Books.

Martins, M. J. D., and Baumard, N. (2022). How to develop reliable instruments to measure the cultural evolution of preferences and feelings in history? Front. Psychol. 13, 786229. doi: 10.3389/fpsyg.2022.786229

McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37, 1–10. doi: 10.1016/j.intell.2008.08.004

Neisser, U. (1979). The concept of intelligence. Intelligence 3, 217–227.

Neisser, U., Boodoo, G., Bouchard, T. J. Jr., Boykin, A. W., Brody, N., Ceci, S. J., et al. (1996). Intelligence: knowns and unknowns. Am. Psychol. 51, 77.

OpenAI (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Ozkul, T. (2009). "Cost-benefit analyses of man-machine cooperative systems by assessment of machine intelligence quotient (MIQ) gain," in 2009 6th International Symposium on Mechatronics and its Applications, ISMA 2009, 5–10.

Palmer, S. E. (1999). Vision Science: Photons to Phenomenology. The MIT Press.

Pandl, K. D., Thiebes, S., Schmidt-Kraepelin, M., and Sunyaev, A. (2020). On the convergence of artificial intelligence and distributed ledger technology: a scoping review and future research agenda. IEEE Access 8, 57075–57095. doi: 10.1109/ACCESS.2020.2981447

Park, H. J., Kim, B. K., and Lim, K. Y. (2001). Measuring the machine intelligence quotient (MIQ) of human-machine cooperative systems. IEEE Trans. Syst. Man Cybernet. A Syst. Hum. 31, 89–96. doi: 10.1109/3468.911366

Piaget, J. (2003). The Psychology of Intelligence. Routledge.

Pouget, A., Beck, J. M., Ma, W. J., and Latham, P. E. (2013). Probabilistic brains: knowns and unknowns. Nat. Neurosci. 16, 1170–1178. doi: 10.1038/nn.3495

Rindermann, H., Becker, D., and Coyle, T. R. (2020). Survey of expert opinion on intelligence: intelligence research, experts' background, controversial issues, and the media. Intelligence 78, 101406. doi: 10.1016/j.intell.2019.101406

Russell, S., and Norvig, P. (2009). Artificial Intelligence: A Modern Approach, 3rd Edn. Prentice Hall.

Scheiber, C. (2016). Is the Cattell–Horn–Carroll-based factor structure of the Wechsler intelligence scale for children–fifth edition (WISC-V) construct invariant for a representative sample of African–American, Hispanic, and Caucasian male and female students ages 6 to 16 years? J. Pediatr. Neuropsychol. 2, 79–88. doi: 10.1007/s40817-016-0019-7

Schneider, W. J., and McGrew, K. S. (2012). "The Cattell-Horn-Carroll model of intelligence," in Contemporary Intellectual Assessment: Theories, Tests, and Issues, eds D. P. Flanagan and P. L. Harrison (The Guilford Press), 99–144.

Shannon, C. (1948). The mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423.

Shneiderman, B. (2020). Design lessons from AI's two grand goals: human emulation and useful applications. IEEE Trans. Technol. Soc. 1, 73–82. doi: 10.1109/TTS.2020.2992669

Siegel, D. J. (2010). Mindsight: The New Science of Personal Transformation. Bantam.

Simonton, D. K. (2003). "Genius and g," in The Scientific Study of General Intelligence (Elsevier), 229–245.

Sipser, M. (2012). Introduction to the Theory of Computation. Cengage Learning.

Skinner, B. F. (1965). Science and Human Behavior. Simon and Schuster.

Solms, M., and Turnbull, O. (2010). The Brain and the Inner World: An Introduction to the Neuroscience of Subjective Experience. Other Press Professional.

Spearman, C. (1961). "The abilities of man," in Studies in Individual Differences: The Search for Intelligence (East Norwalk, CT: Appleton-Century-Crofts), 241–266.

Sterling, P., and Laughlin, S. (2015). Principles of Neural Design. MIT Press.

Sternberg, R. J. (2003). ""My house is a very very very fine house"–But it is not the only house," in The Scientific Study of General Intelligence (Elsevier), 373–395.

Sternberg, R. J. (2019). A theory of adaptive intelligence and its relation to general intelligence. J. Intell. 7, 1–17. doi: 10.3390/jintelligence7040023

Turing, A. M. (1950). I.—Computing machinery and intelligence. Mind LIX, 433–460. doi: 10.1093/mind/LIX.236.433

Wechsler, D. (1958). The Measurement and Appraisal of Adult Intelligence, 4th Edn. Williams & Wilkins Co.

Weiss, L. G., Saklofske, D. H., Prifitera, A., and Holdnack, J. A. (2006). WISC-IV Advanced Clinical Interpretation. Elsevier.

Zadeh, L. A. (2000). From computing with numbers to computing with words-from manipulation of measurements to manipulation of perceptions! Adv. Soft Comput. 6, 3–37. doi: 10.1007/978-3-7908-1841-3_1

Zhao, J., Wu, M., Zhou, L., Wang, X., and Jia, J. (2022). Cognitive psychology-based artificial intelligence review. Front. Neurosci. 16, 1024316. doi: 10.3389/fnins.2022.1024316

Keywords: human intelligence, computational intelligence, artificial intelligence, intelligence measurement, IQ, ChatGPT, CHC model, semantics

Citation: Da Silveira TBN and Lopes HS (2023) Intelligence across humans and machines: a joint perspective. Front. Psychol. 14:1209761. doi: 10.3389/fpsyg.2023.1209761

Received: 21 April 2023; Accepted: 01 August 2023;
Published: 17 August 2023.

Edited by:

Eddy J. Davelaar, Birkbeck, University of London, United Kingdom

Reviewed by:

Joachim Funke, Heidelberg University, Germany
Pier Luigi Gentili, Università degli Studi di Perugia, Italy

Copyright © 2023 Da Silveira and Lopes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tiago Buatim Nion Da Silveira, tbnsilveira@gmail.com

ORCID: Heitor Silvério Lopes orcid.org/0000-0003-3984-1432
