- 1Faculty of Mathematics, Computer Science, Physics, Electrical Engineering and Information Technology, Brandenburg University of Technology Cottbus—Senftenberg, Cottbus, Germany
- 2Faculty of Mathematics and Geography, Catholic University Eichstätt—Ingolstadt, Eichstätt, Germany
- 3Bernstein Center for Computational Neuroscience, Berlin, Germany
The concept of intelligent agents is—roughly speaking—based on an architecture and a set of behavioral programs that primarily serve to solve problems autonomously. Increasing the degree of autonomy and improving cognitive performance, which can be assessed using cognitive and behavioral tests, are two important research trends. The degree of autonomy can be increased using higher-level psychological modules with which needs and motives are taken into account. In our approach we integrate these modules into an architecture for an embodied, enactive multi-agent system, such that distributed problem solutions can be achieved. Furthermore, after uncovering some weaknesses in the cognitive performance of traditionally designed agents, we focus on two major aspects. On the one hand, the knowledge processing of cognitive agents is based on logical formalisms, which have deficiencies in the representation and processing of incomplete or uncertain knowledge. On the other hand, in order to fully understand the performance of cognitive agents, explanations at the symbolic and subsymbolic levels are required. Both aspects can be addressed by quantum-inspired cognitive agents. To investigate this approach, we consider two tasks in the sphere of Shannon's famous mouse-maze problem: namely target object classification and ontology inference. First, the classification of an unknown target object in the mouse-maze, such as cheese, water, or bacon, is based on sensory data that measure characteristics such as odor, color, shape, or texture. For an intelligent agent, we need a classifier with good prediction accuracy and explanatory power on a symbolic level. Boolean logic classifiers do work on a symbolic level but are not adequate for dealing with continuous data. Therefore, we demonstrate and evaluate a quantum-logic-inspired classifier in comparison to Boolean-logic-based classifiers.
Second, ontology inference is iteratively achieved by a quantum-inspired agent through maze exploration. This requires the agent to be able to manipulate its own state by performing actions and by collecting sensory data during perception. We suggest an algebraic approach where both kinds of behaviors are uniquely described by quantum operators. The agent's state space is then iteratively constructed by carrying out unitary action operators, while Hermitian perception operators act as observables on quantum eigenstates. As a result, an ontology emerges as the simultaneous solution of the respective eigenvalue equations.
1. Introduction
In the early years of cybernetics, the Josiah Macy Jr. Foundation [1] sponsored a series of conferences where the pioneers of the emerging information and communication sciences came together to develop the crucial concepts of information and control in animals and machines. Among the conference attendees were well-known scientists from mathematics, engineering, psychology, the humanities, biology, and related fields, such as Margaret Mead, John von Neumann, Norbert Wiener, Heinz von Foerster, Warren McCulloch, Walter Pitts, and Claude E. Shannon, who had published groundbreaking monographs such as “Cybernetics or Control and Communication in the Animal and the Machine” [2], “The Computer and the Brain” [3] or “The Mathematical Theory of Communication” [4].
Shannon was particularly interested in clarifying the concept of information without referring to the vague notion of “meaning” [4]. To this aim, Shannon presented an electromechanical mouse-maze system, in which a cognitive agent, an artificial mouse dubbed “Theseus” (after the ancient Greek hero who overcame the mythological Minotaur monster in the Cretan labyrinth), had to find its way out of the maze [5]. Shannon's testing arrangement consisted of 25 maze fields on which walls could be arbitrarily erected for probing different labyrinth configurations. The magnetic mouse “Theseus” was driven by a motorized electromagnet beneath the maze floor. Its motor pairs allowed the mouse to navigate through the maze in all four geographic directions. Under the floor of the labyrinth, there was an arrangement of relays that formed the agent's “memory”. Shannon demonstrated that “Theseus” was able to explore the labyrinth, using relay switching to memorize events such as bouncing off a wall. Notably, “Theseus” always found the exit of the maze by the principle of trial and error, even when the agent was initially placed in different fields of the maze.1
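Shannon's trial-and-error strategy with relay memory can be illustrated by a minimal sketch. This is not a faithful model of the relay circuitry; the grid encoding, the fixed trial order of directions, and the step bound are illustrative assumptions. The walker stores, per cell, the direction that last succeeded, so that a repeated run replays the memorized path much as the relay array did.

```python
DIRS = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}

def explore(start, goal, walls, size=5, memory=None):
    """Walk from `start` to `goal` on a size x size grid.
    `walls` is a set of blocked (cell, direction) pairs; `memory`
    maps a cell to the direction that last succeeded there."""
    memory = {} if memory is None else memory
    pos, steps = start, 0
    while pos != goal and steps < 1000:
        # try the memorized direction first, then the rest in fixed order
        trial = [memory[pos]] if pos in memory else []
        trial += [d for d in DIRS if d not in trial]
        for d in trial:
            dx, dy = DIRS[d]
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and (pos, d) not in walls:
                memory[pos] = d      # "relay memory": remember what worked
                pos = nxt
                break
        steps += 1
    return steps, memory
```

On a 2 x 2 grid with two hypothetical walls, a first run learns a detour and a second run, seeded with the acquired memory, follows it directly.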
Both Theseus characters, the artificial agent and the mythological figure, solve their task of exiting the labyrinth by external means. On the one hand, Theseus in the Greek legend was equipped with a long thread by princess Ariadne (the “Ariadne thread”) to memorize the path back to the exit. On the other hand, Shannon's mouse exploited the switching states of the relay cells in a similar fashion. This simple technological solution is not feasible in the context of contemporary cognitive systems research for the following reasons. First, the artificial agent does not distinguish between an outer and an inner world, since the acquisition of information, as well as the release of information via sensors and actuators, is not differentiated according to attributes and effects. This means that target and auxiliary objects cannot be differentiated by means of their attributes and that no internal representations can be built, processed, or stored either. Second, since neither needs, motives, nor goals are taken into account, the behavior of the agent resembles the principle of classical conditioning, where no goal-oriented decisions among different actions take place.
In order to cope with these difficulties, the opposite strategy is employed in current cognitive systems research. There, embodied and enactive artificial agents [6] are endowed with both concepts of meaning and concepts of failure. One first assumes the existence of an external world consisting of objective entities with their respective properties and with different relationships between these objects. Philosophically speaking, there is an ontology of things as they are in the outer world, independent of any perception or observation. Encountering the objects, their properties, and the relations between different objects of this external world, the cognitive agent performs interactions in the form of perceptions to experience the state of the world and in the form of actions to modify the state of the world. Thus, the agent must be able to create, process, and store its own reactions to the external objects in its own inner world. In traditional philosophy of mind and artificial intelligence research, such reactions are conventionally dubbed “representations” [7, 8], which is highly controversial in current discourse [6, 9–11] and will be addressed subsequently. In order to recognize deceptions and failures, the agent must also be able to differentiate between these different worlds and recognize mistakes in order to avoid misfortune. Learning, thinking, and adaptation processes are triggered by such mistakes. Using philosophical terminology again, the agent requires an epistemology of the things of the outer world, as they appear in its subjective internal world of representations. Eventually, the epistemic representations of ontic properties and relations should be as faithful as possible, or veridical [7, 12, 13].
1.1. Cognitive agents
With the concepts of intelligent agents and cognitive dynamical systems [14], a large number of research directions in artificial intelligence (AI) and machine learning (ML) were integrated into AI systems that are characterized by a high degree of autonomy. Taking a rough technical view, an intelligent agent can be described by its architecture and its behavioral programs. Essentially, behavioral programs comprise the data to be processed and some mathematical algorithms. In contrast, cognitive architectures serve to integrate psychological findings into a formal model that is as economical as possible and capable of being simulated. The approach assumes that all cognitive processes can be traced back to a few basic principles. According to [15], this means that cognitive architectures have suitable representational data structures, that they support the principles of composition, adaptation, and classification, and that they are autonomously capable of gaining knowledge through logical reasoning and learning. Further criteria are productivity, robustness, scalability, and compactness. In psychological research, these architectures are available as computer programs, which are used to empirically test psychological theories. In contrast, the utility of cognitive architectures in AI research lies primarily in the construction of intelligent machines and in the ability to explain their behavior.
The starting point of our own integrative approach is an embodied, enactive cognitive multi-agent system (MAS) [6, 16], which we refer to as the DAGHT architecture. It consists of five different agents, motivated by other mythological characters besides Shannon's Theseus. Each agent fulfills its own specific tasks in terms of distributed intelligence, and the agents are mutually dependent on one another. Figure 1 shows the hierarchically ordered three-layer structure in which the five agents are reciprocally connected in a communication network [17]. The structural requirements for the DAGHT architecture result from the psychological superstructure consisting of needs, motives, and goals at the highest level. The units at the intermediate level deliver behavioral programs as well as model-based symbolic information processing that is embedded in a perception-action cycle (PAC) at the lowest level, ensuring enactive embodiment within the external world.
Figure 1. Multi-Agent-System. The DAGHT-architecture results from the analysis of natural organisms (biological cells) and technical artifacts (embedded Turing Machines) as well as traditional cognitive architectures. Necessities or needs are given by a system designer (“Demiurge”), motives and goal directed behavior are implemented by the hero “Theseus”. The ability to allow permissions in terms of the interaction between the agent and the environment is associated with the “Argus” agent. The perception-action cycle controls information exchange with the environment. It is separated into interaction and communication. Interaction is within the scope of the “Golem” agent, it comprises the control of physical system components and the assessment of target objects. The “Golem” agent itself is controlled by the “Homunculus” agent which is working on representational data structures and is additionally speech-enabled.
The agents in Figure 1 are named after the characteristics of some mythological figures, namely “Demiurge” (D), “Argus” (A), “Golem” (G), “Homunculus” (H), and “Theseus” (T). On the lowest level, we find the perception-action cycle, which involves the characters Golem and Homunculus. Golem fulfills the tasks that typically occur at the system boundary: the enactive execution of instructions by actuators, the perception of measurements using sensors, and the implementation of reflexive behavior. It processes signal-related symbolic data structures. Homunculus is in charge of the interpretation of sensory input, the articulation of actuator instructions, and the goal-directed selection of actions. In contrast to Golem, it operates on representational symbolic data structures and is equipped with a linguistic user interface. The psychological superstructure builds on the lower level of the PAC and is structured by another three agents. Theseus provides action-guiding behavioral programs in the sense of the psychological category of “ought”. This includes, e.g., exploration and the finding of objects as well as programs for thinking, learning, interaction, and communication processes. We understand embodied adaptive performance in the sense of homeostasis, in which the internal state of the system has to be compared with the external state of the system environment. The determination of the external condition requires a model of the system environment which must meet the veridicality requirement. Such a model contains representations of objects and of their relationships within the system environment; it is created by the Homunculus agent, which operates logically in controlling behavior. For model construction, comparison with reality, model adaptation, and the exchange of information with the system environment are mandatory. This, however, can only be performed via the Golem agent. Furthermore, the MAS has protective functions that fall within the sphere of a guardian agent, named Argus.
Argus determines what the agent may do and limits possible actions between Golem and the environment. These protective functions, with which the functionality and viability of the system can be maintained, cover the physical interaction and include access and authenticity controls, e.g., object classification in terms of edible or inedible [18].
Argus and Theseus are located on the intermediate level of the MAS. Already at this level, access to the representational data structures of the inner model (construction plan) and the outer model (environment) takes place, so that the behavior-controlling mechanisms can implement the corresponding adaptations on various time scales. These include reflex, instinct, and coping behavior, taking effect on the time scales of ontogenesis (learned behavioral programs) and actual genesis (perception-action cycle). Adaptations that affect the construction plan and could be passed on to “descendants” take place on the time scale of phylogenesis. However, changes to the construction plan are reserved for the Demiurge, heading the MAS. In addition to maintaining the construction plan, it is also in charge of “wanting”. The psychological category of desire is determined by needs that can only be satisfied but not changed. Therefore, adjustments to the construction plan can only be made if they do not conflict with the given needs. We understand the Demiurge as a character that—similar to the cell nucleus—holds the blueprint and can access and change it. Changes to the blueprint therefore take place on the phylogenetic time scale, so that evolutionary development may also occur.
Furthermore, a cognitive dynamical system has to take into account the distinction between the outer perspective upon the environment, as described by an ontology, and the inner perspective of an epistemic model of the environment built from mental representations. These mediating representations have to be veridical as well; they provide the adaptive performance of the system. In doing so, the following principles and requirements must be met: solving the symbol grounding and frame problems [19, 20]; recording and delivering information, information transformation, and compositionality [21]; modeling, validation, and simulation; providing goals, motives, and necessities; knowledge representation and processing; behavioral mechanisms; the ability to learn; distributed problem solving; and language capability [22].
The DAGHT architecture above comprises an extended perception-action cycle (PAC) [23, 24] that is depicted in Figure 2. A PAC often forms the core of an embodied and enactive cognitive dynamic system [14]. It describes the interaction of a cognitive agent with a dynamically changing environment. The agent is equipped with sensors for the perception of its current state in the environment and with actuators allowing for active state changes. A central behavior control prescribes goals and strategies for problem solving. Here, the psychological categories of “want”, “should”, and “allow” are realized by the cognitive control of the Demiurge from Figure 1.
Figure 2. Double cognitive loop using an embedded relational data model. The inner cognitive loop corresponds to the interaction between the agent and the environment (non-verbal information exchange). The outer cognitive loop is used for the communication between a speech-enabled agent and any other cognitive agent or natural language user (verbal information exchange). The flow of information between the behavioral control and the relational data model is embedded in a simulation loop and enables the simulation of actions as well as the prediction of effects. Since the simulation loop is not linked to the environment through perception and action, this loop is not referred to as a PAC. The exchange of information between two systems with different cognitive abilities (e.g., between humans and cognitive agents) is denoted as Inter-Cognitive Communication [25].
In Shannon's original mouse-maze system, the motors are the actuators pulling the mouse along a path until it hits a wall, which is registered by the corresponding sensors. These perceptions were stored in a relay array so that the corresponding actions could subsequently be avoided. In our DAGHT architecture, the Demiurge prescribes a certain maze cell where the agent could find a “piece of cheese”, a “glass of water”, or even “a rind of bacon” as possible goals. When one goal is eventually reached, no further action is required.
Since complex cognitive systems are characterized by a high degree of autonomy, it is natural, with regard to the learning ability of such systems, to focus on those learning methods where a “teacher” is not demanded. This includes associative learning algorithms, such as reinforcement learning [14], which are based on environmental models and in which a relationship between the predictions of the model and the observations in reality must be established. The basic structure of cognitive information processing is shown in Figure 2. It is supported by model-based knowledge processing, with which representational data structures can be processed, either mathematically or logically. Such models can incorporate construction plans of the system as well as dynamic, state-space-based models of the environment. In the following, we describe the extended PAC of Figure 2 in more detail.
Simulation loop. After modeling and validation, the cognitive system has a simulation-capable model in which the representational data structures can be processed in an informationally closed manner. This approach can be used, for example, when planning optimal control sequences or when solving problems. These tasks are solved through the information processing in the simulation loop.2 Here, the query and response of the knowledge model alternate cyclically, but there is no relation to the signal level. Thus the simulation loop is not denoted as a PAC. Of course, the existential risk of damage from this inward-directed behavior is negligible. However, since the existence of a cognitive system—in the sense of a final system—depends on a selection principle, any model-based solution needs to be related to reality.
Interaction loop. The agent causes mechanical movements or changes in physical quantities and also records information or evidence on the existence of objects or their properties. This information is required for comparison with reality. For this reason, a compound system is required, which enables a logical interconnection between comparative information (model knowledge) and measurement information (current observations, evidence). Only through this comparison facility is a model-based system able to decide “What is the case?” or “What does this observation mean?” or whether an adjustment of the model is required. In the case of uncertain information, a probabilistic description is mandatory, so that a mathematical-statistical interconnection between comparative information and measurement information may be established (e.g., via Bayesian modeling). Only when facts are available, or the most probable state of the world has been determined, can the system select the next steps to achieve the specified goal. Hence, all perceptions, behavior-relevant decisions, and actions that affect the physical environment are processed via the inner PAC. According to Figure 2, these processes are realized by the interaction loop.
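The mathematical-statistical interconnection between model knowledge and measurement information can be sketched as a plain Bayesian update. The class names and all probabilities below are illustrative assumptions, not values from our experiments.

```python
def bayes_update(prior, likelihood):
    """posterior(c) is proportional to prior(c) * P(observation | c)."""
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# comparative information (model knowledge): prior over target classes
prior = {"cheese": 0.5, "water": 0.3, "bacon": 0.2}
# measurement information: assumed P(strong odor | class) per class
odor_likelihood = {"cheese": 0.7, "water": 0.05, "bacon": 0.6}

posterior = bayes_update(prior, odor_likelihood)
best = max(posterior, key=posterior.get)   # "What is most probably the case?"
```

With these hypothetical numbers, a strong-odor observation shifts the belief toward “cheese”, after which the agent can select its next action.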
Communication loop. Representational data structures are already used for problem solving and productive thinking. In communication processes, these data structures play an even more prominent role. In order to realize communication processes between cognitive systems, the external PAC is required, which is referred to as the communication loop. With this kind of coupling, statements about the world can be obtained by means of natural language expressions, which can be transformed logically and added to the existing knowledge. Conversely, statements derived from logical expressions can also serve as evidence for observed facts [26].
Compound system. By linking the interaction loop with the simulation loop, we have shown above how observations of the environment can be mapped to possible categories of the environment model. In this way, the agent arrives at ideas about “what is the case in the world”. This approach corresponds to the concept of meaning in a truth-functional sense. In addition, a connection between the communication and interaction loops can be realized. This is particularly advantageous when several interpretations of linguistic input are possible. In this case, the sensory input of the interaction loop can be used as context, such that input ambiguities can be resolved. The linking of both loops means that behavioral control is embedded in a double cognitive loop. In order to implement this embedding technically, it has to be clarified which requirements must be fulfilled so that it can be realized as a logical compound system. In [27], we proposed that the information from both loops must be available in a form in which it can be logically processed and compared with one another. For this purpose, statements in all information loops must be expressed in a formal language and thus transformed and processed in a coherent manner [26].
As pointed out above, an important task of cognitive agents is to relate their subjective inner worlds to the objective outer world. On the one hand, things of the external world exist independently of observations. This includes the “being” of entities and their attributes as well as the structural relationships between them [7]. The philosophical discipline of ontology deals with ordering principles of being and with the basic structures of reality. On the other hand, there are those structures with which the phenomena of reality are formally described. The philosophical discipline dealing with the formal description of knowledge is called epistemology. This notably includes terms with which “representations of being” are formally available. However, a cognitive agent can only acquire formal knowledge through experiential analysis and observation, and this requires the informational coupling of the agent with its environment according to Figure 2. The information processing of a cognitive agent is therefore characterized by the formation of representations and the logical operations upon them. In this context, object classification and ontology inference are two prominent tasks that can be used to capture the basic structures of reality and which are discussed in this study.
1.1.1. Classification
Within our DAGHT architecture, object classification belongs to the behavior programs of Argus and is based on typical object features and their measured values. Classification is required for conceptualization and for grounding symbols in perceptions. A cognitive dynamical system may be equipped with different sensors, e.g., for taste, smell, or color, that provide a high-dimensional continuum, often called the observation space [28]. In classical AI applications, standard ML techniques such as K-means clustering, expectation maximization, Voronoi tessellation, or decision trees are utilized to obtain a partitioning of these observation spaces [7].
Beforehand, an exploration phase is required in which modeling (e.g., the building of decision trees) takes place using some training data. Based on these models, the target objects can be classified and assessed with regard to their usefulness for the satisfaction of needs.
Of course, the prediction of a classification model is typically not completely correct. Instead, the goal is to find a classification model with high accuracy. A large number of correctly labeled training objects is essential for achieving high accuracy. The typically small number of training objects from an exploration phase leads to an initial classification model with poor accuracy. For improving the classification model, the set of training objects should evolve and enlarge after the exploration phase. That means it must be possible to check the correctness of performed classification predictions and to use that information for refining and enlarging the set of training objects. Based on that, the classification model needs refinement too. Thus, learning a classification model within the DAGHT architecture is an ongoing process of reinforcement learning [14].
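The refinement loop described above can be sketched as follows, using a plain nearest-centroid classifier as a stand-in (the quantum-logic-inspired classifier of Section 5 is not reproduced here). The feature vectors and class labels are illustrative assumptions.

```python
from collections import defaultdict

class GrowingClassifier:
    """Nearest-centroid classifier whose training set grows whenever a
    prediction can be verified against the environment."""

    def __init__(self):
        self.samples = defaultdict(list)          # label -> feature vectors

    def fit(self, x, label):
        self.samples[label].append(x)

    def _centroid(self, label):
        pts = self.samples[label]
        return [sum(coord) / len(pts) for coord in zip(*pts)]

    def predict(self, x):
        def dist2(label):
            return sum((a - b) ** 2 for a, b in zip(x, self._centroid(label)))
        return min(self.samples, key=dist2)

    def verify(self, x, true_label):
        """After the true label is revealed, enlarge the training set and
        thereby refine the model (the reinforcement step)."""
        correct = self.predict(x) == true_label
        self.fit(x, true_label)
        return correct
```

Each verified observation enlarges the sample set and shifts the class centroids, so the model's accuracy can improve as exploration continues.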
1.1.2. Ontologies
An exploration phase not only precedes the object assessment but also the finding of target objects. For this, relations between objects, such as the locations of target objects, can be found using the ontology inference method. The corresponding exploration program, based on the associated training data, is part of the behavior programs of Theseus.
The term ontology is widely used in the context of artificial intelligence [7], where it is often understood as synonymous with “semantic models” [29] or “conceptual systems” [30]. Although it is acknowledged that the term has its origin in philosophy [12, 31], it is used in a much narrower sense as the “specification of a conceptualization” [31], where a conceptualization is understood as a triple of a universe of discourse with functions and relations. The universe of discourse contains the objects “hypothesized to exist” [32], which can be properties or reified relations [32]. Here it is taken for granted that “[f]or knowledge-based systems, what ‘exists' is exactly that which can be represented” [31]. Although this last notion links to radical constructivism, incorporating notions from Kant and Piaget (cf. [10]), the a priori of knowledge-based systems is seldom discussed. As an exception, [30] states, “Piaget insisted that concepts must arise out of activities — because simple action schemas develop before conceptual thought […]”3, indirectly relating to epistemology.
In computer science—following the tradition of languages, compilers, and tools—an ontology is what can be specified by an ontology language, of which the web ontology language (OWL) [35, 36] is by far the most prominent one, together with its tools Protégé [37] and WebProtégé [38]. And although much work is done to derive ontologies from text [e.g., 39], they are usually hand-crafted by “highly-trained knowledge engineers” [30], for which Project Cyc [40] is an extreme example, where since 1984 a huge knowledge base has been assembled using the project's own language CycL [41]. The knowledge base was intended to be 10^5 or 10^6 times bigger than what is necessary for small expert systems [41]. But it remains the case that ontologies define the vocabulary of a domain and therefore use natural language as their basis [13, 42], circumventing the symbol grounding problem [19].
Picking up on Piaget's notion above, he also states that the construction of knowledge consists of the alternation of assimilation and accommodation, where the data of experience are incorporated into existing schemata, constantly modifying them [43]. This idea is summarized by [30]: “Concepts and activities bootstrap each other.”
Linking this section back to the previous one, it must be said that finding objects and finding relations between objects clearly depend on each other [cf. 44], which relates cognitive agents to bidirectional hierarchical systems [45, 46]. In the same manner, knowledge and its construction are inseparable [cf. 44], and the former is determined by the latter.
1.2. Quantum-inspired cognitive agents
In cognitive science and psychology, several pertinent puzzles of decision theory, such as the Ellsberg and the Allais paradoxes, the conjunction or the disjunction fallacy, or even questionnaire ordering effects, were not solvable by means of classical statistical modeling, such as Kolmogorovian probability theory or Markov chains [47–50]. In order to cope with these problems, psychologists have developed alternative approaches such as prospect theory [47] or the bounded rationality program [50, 51]. In the latter framework, a cognitive agent under environmental pressure does not have all possible cognitive resources available to select the most rational decision. Instead, only a limited number of heuristic cues are evaluated according to a take-the-best strategy. Notably, during the last two decades, experts from mathematical psychology and theoretical physics have realized in a common effort that the human mind can be treated as an abstract quantum system (refer to e.g., [48, 49] and references therein). This quantum cognition approach rests essentially on projective geometry as employed in the Hilbert space formalism of quantum physics. Here, the mental state of a cognitive agent is prescribed by a state vector in such a Hilbert space, which is transformed into a subsequent state vector by the application of mental operators describing logical propositions or cognitive processes. If those operators do not commute with each other, the ordering of their successive applications matters, and different results may occur, e.g., in logical conjunctions or in a questionnaire.
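The ordering effect of non-commuting operators can be demonstrated in a minimal two-dimensional sketch: two rank-one “yes” projectors at different angles yield different sequence probabilities depending on which question is asked first. The angles and the initial state below are arbitrary illustrative choices, not fitted to any data set.

```python
import math

def proj(theta):
    """Projector onto the ray at angle theta (2x2 matrix as nested lists)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [s * c, s * s]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def seq_prob(first, second, state):
    """P(yes to `first`, then yes to `second`) = |second . first . state|^2."""
    v = apply(second, apply(first, state))
    return v[0] ** 2 + v[1] ** 2

psi = [1.0, 0.0]                                # initial belief state
A, B = proj(math.pi / 4), proj(math.pi / 3)     # two non-commuting questions
p_ab = seq_prob(A, B, psi)                      # ask A first, then B
p_ba = seq_prob(B, A, psi)                      # ask B first, then A
```

Since the projectors do not commute, p_ab and p_ba differ, which is the formal counterpart of the questionnaire ordering effects mentioned above.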
It has been emphasized by different researchers (e.g., in the open-peer commentary of [49]) that the geometric structure of the quantum cognition approach can be implemented by the linear structures of neural networks, without necessarily claiming that biological brains must be physical quantum computers [15]. Since vector spaces are employed not only in quantum cognition but also in biological and artificial neural networks, as investigated in computational neuroscience and artificial intelligence, respectively, the underlying mathematics of linear and abstract algebra is a promising framework for the integration of related approaches into quantum-inspired (or neuro-inspired) cognitive agents. Further examples of such attempts are conceptual spaces, latent semantic analysis (LSA), geometric information retrieval, and vector symbolic architectures [8, 52–56].
Looking at the logical structure of Hilbert space quantum theory, it turned out that quantum logic refers to non-Boolean, yet partially Boolean, lattices [28, 57, 58] instead of the Boolean lattices underlying classical logic and probability theory. Therefore, the Geneva school of quantum physics investigated the representation theory of modular and orthomodular lattices upon Hilbert spaces. Interestingly, using the methods of operational statistics [59–61], Blutner and beim Graben [50] have recently shown how bounded rationality and quantum cognition can be conceptually integrated into a common framework, where a single cue of a take-the-best strategy corresponds to one particular operational perspective upon a complex decision situation, so that limited cognitive resources enforce partially Boolean descriptions. This finding could also be of significance for the frame problem in cognitive science [20], where an agent has to select the most relevant frame among different incompatible Boolean descriptions.
Finally, quantum-inspired geometric approaches suggest a revision of the controversially interpreted representation concept [6, 9–11] in the sense of algebraic representation theory [62]. This alternative interpretation is due to Skinner's ABC scheme [63] where (verbal) behavior (B) acts as a function on an antecedent state (A), mapping it to a consequent state C in the belief state space of a cognitive agent. If acts of behavior possess some algebraic structure (which is the case for a word semigroup, for example), this structure remains preserved in a representation of B through operators acting upon a suitably chosen geometric space for antecedent A and consequent C states. This idea has been formalized within the framework of dynamic semantics in such a way that the meaning of an utterance is simply its operator impact on the agent's belief space [64, 65]. Similar attempts have been made for vector symbolic architectures and hyperdimensional computation [15, 54, 55]. Accordingly, an operator representation for the word semigroup acting upon the state space of a neural automaton could be constructed [66, 67], while syntactic term algebras for minimalist grammars and context-free grammars lead to Fock space representations [56, 68], as used in quantum field theory [69].
The remainder of the article is organized as follows. In Section 2, we address the problem statement and the experimental environment of our cognitive agent system which is essentially a quantum-inspired version of Shannon's original mouse-maze setup. Subsequently, in Section 3, we summarize the required mathematical concepts and the notations being used. Then, we introduce the novel idea of a quantum-inspired perception-action cycle as the basis of our main contributions in Section 4. In the following Section 5, we come to the first focus of our study and describe quantum-inspired classifiers. In Section 6, we proceed with our attempt at quantum-inspired ontology inference. The paper concludes with a summary and a discussion of the achieved results in Section 7.
2. Problem statement and experimental setup
In keeping with the historical background, we consider the mouse-maze problem, where an artificial mouse lives in a simple N × M maze world that is given by a certain configuration of walls. For the mouse to survive, one or more target objects are located at some places in the maze. In our setup, we are using the target object types cheese (C), bacon (B), and a glass of water (G) to satisfy the primary needs of hunger and thirst. These object types are defined symbolically by the set {C, B, G, NOB}, where NOB refers to no object at all. The agent is able to move around the maze and perceive information about its environment via physical interaction. This is organized by the inner perception-action cycle (interaction loop, Figure 2) that allows the agent to navigate to these target objects. To this end, the agent needs to measure its current position (x, y) by two sensors, where x ∈ X = {1, ..., N} and y ∈ Y = {1, ..., M} apply. It also has to determine the presence or absence of target objects by an object classifier, which we consider at first as a single complex sensor. Then the current situation can be described on the basis of the measurement result for the current position and the result of object classification. Subsequently, the measurement information needs to be encoded as a string of symbols and has to be translated into a representational data structure saved in a knowledge base. This kind of knowledge (“situations”) should be stored as the result of an exploration phase if no logical contradictions occur. A further kind of knowledge is the set of movements in the maze. These “movements” are based on permissible actions, which initially correspond to the four geographic directions, north (N), south (S), west (W), and east (E), and are defined symbolically by the set {N, S, W, E, NOP} (NOP denoting no operation here). Each action starts at a position z = (x, y) and ends at a position z′ = (x′, y′).
For this purpose, the actuators associated with the x- or y-direction, respectively, can be incremented or decremented by one step. Hence, to establish the parameterization of the four geographic directions, we define the sets ΔX = ΔY = {−1, 0, 1}. Thus, the south action is parameterized, for example, by the ordered pair (0, −1). Note that with the knowledge about “movements” the agent's behavior can be described in the sense of “causality”. From a technical point of view, the relationship between a cause (z = (x, y), a) and an effect z′ = (x′, y′) can be expressed, e.g., by the transition equation of a finite state automaton. The key role in cognitive systems comes from the knowledge model. Building and using such a model is based on the ability to exchange information with the environment by communication or interaction. We assume that all cognitive activities correspond to manipulations of the knowledge model. This model represents the physical world through a network of semantic objects—described by measurable attributes—that are related to each other. Objects, attributes, and relationships are the information of interest to which both interaction and communication refer. However, in order to be able to solve problems through planning and goal-oriented behavior, representations are required that can be built, processed, and stored in the knowledge model. In this context, object classification and ontology inference are two prominent tasks that can be used to build representations. Classification of an unknown target object in the mouse-maze, such as cheese, water, and bacon, is based on sensory data that measure characteristics like odor, color, shape, or nature. Boolean logic classifiers do work on a symbolic level but are not adequate for dealing with continuous data. Therefore, we investigate a quantum-logic-inspired classifier in comparison to Boolean-logic-based classifiers.
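The cause-effect relation described above can be written down as a tiny finite state automaton in code. The following Python sketch is purely illustrative: the maze dimensions, the wall set, and the convention that blocked moves leave the position unchanged are our own assumptions, not part of the paper's setup.

```python
# Illustrative transition function of the maze automaton (hypothetical values).
N, M = 5, 4  # assumed maze dimensions

# Permissible actions and their parameterization over DX = DY = {-1, 0, 1}.
ACTIONS = {"N": (0, 1), "S": (0, -1), "W": (-1, 0), "E": (1, 0), "NOP": (0, 0)}

# Assumed wall set: positions that can never be entered.
WALLS = {(3, 2)}

def transition(z, a):
    """Cause (z, a) -> effect z': apply action a at position z = (x, y).

    Moves that would leave the maze or enter a wall leave the state
    unchanged (an assumption; the quantum model later annihilates them).
    """
    x, y = z
    dx, dy = ACTIONS[a]
    xp, yp = x + dx, y + dy
    if not (1 <= xp <= N and 1 <= yp <= M) or (xp, yp) in WALLS:
        return z
    return (xp, yp)
```

For example, the north action maps (1, 1) to (1, 2), while a south step at the lower boundary is blocked.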
Furthermore, as a result of performing actions and collecting sensory data during perception, an ontology of the physical world should emerge. Hence, we study whether ontology inference can be achieved during maze exploration using an iterative quantum-inspired method.
3. Preliminaries
In the following, we present the basics of algebraic quantum theory and quantum logic.
3.1. Algebraic quantum theory
In contrast to the purely epistemological formalization that von Neumann [70] delivered in his statistical Hilbert space quantum theory, algebraic quantum theory [57, 69, 71] allows the clear distinction between ontic and epistemic descriptions of arbitrary quantum systems [72]. Both levels of description rely upon particularly chosen observable algebras, called C*-algebras, which are basically non-commutative generalizations of the complex number field ℂ. The field of complex numbers is first of all a two-dimensional vector space over the field of real numbers, which means that addition and real scalar-multiplication are well-defined. Moreover, ℂ is also an algebra, as a product of complex numbers is defined, producing other complex numbers. The field ℂ is equipped with a norm (the modulus) and it is complete with respect to this norm, making it a Banach algebra. Therefore, infinite power series such as (holomorphic) Taylor series and exponential functions are convergent in ℂ. In ℂ an involution is defined, yielding the complex conjugate of any number. Finally, ℂ exhibits the so-called C*-property, relating the norm to the product of a complex number with its own conjugate. Yet, because multiplication is commutative in ℂ, it becomes an abelian C*-algebra in contrast to those required for quantum systems.
A quantum observable algebra contains operators that can be linearly combined and also sequentially composed to produce new operators. As a result, the commutator of any two operators can be computed. The commutator measures the degree of compatibility of both operators. In the case of two commuting operators, their commutator vanishes and they are called compatible because the order of sequential composition is irrelevant. If the commutator of two operators does not vanish, they are called incompatible, or even complementary. The ontology of a quantum system is then uniquely prescribed by the fundamental commutation relations (FCR) of the underlying observable algebra.
3.1.1. Ontic description
In general, an ontic C*-algebra is a real or complex Banach-*-algebra 𝔄 (i.e., a complete Banach algebra over the real or the complex number fields, ℝ or ℂ, respectively, with involution *:𝔄 → 𝔄) satisfying the C*-property, ||AA*|| = ||A*A|| = ||A||², for all A ∈ 𝔄 [69, 71], where A* is the adjoined observable obtained from the involution A* = *(A), and ||·|| denotes the norm on 𝔄. An element A ∈ 𝔄 is dubbed Hermitian when A = A* and unitary when A* = A−1. Another important concept is idempotence which holds when A² = A. Hermitian and idempotent observables are called projectors.
For two elements A, B ∈ 𝔄, the commutator is defined through
[A, B] = AB − BA.     (1)
Clearly, [A, A] = 0 for any A ∈ 𝔄. The commutator is alternating (antisymmetric), linear in both arguments, and obeys the product rule
[A, BC] = [A, B]C + B[A, C].     (2)
If 𝔄 = span({Ak ∣ k ∈ ℕ}) with a (countable) basis of observables Ak ∈ 𝔄, the fundamental commutation relations of the algebra are given by the linear combinations
[Ai, Aj] = ∑k aijk Ak.
The coefficients aijk ∈ ℂ are called the structure constants of algebra 𝔄. In high energy physics, e.g., the structure constants of the particle (flavor) algebras reflect the possible scattering results in a particle collider.
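The algebraic properties of the commutator are easy to verify numerically for matrix algebras. A small numpy check follows; the matrix dimension and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three random complex 4x4 matrices standing in for algebra elements.
A, B, C = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
           for _ in range(3))

def comm(X, Y):
    """Commutator [X, Y] = XY - YX of Eq. (1)."""
    return X @ Y - Y @ X

assert np.allclose(comm(A, A), 0)                     # [A, A] = 0
assert np.allclose(comm(A, B), -comm(B, A))           # antisymmetry
assert np.allclose(comm(A, 2 * B + C),
                   2 * comm(A, B) + comm(A, C))       # linearity
assert np.allclose(comm(A, B @ C),
                   comm(A, B) @ C + B @ comm(A, C))   # product rule, Eq. (2)
```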
When the observable algebra of a quantum system contains a maximal abelian subalgebra of hermitian operators, this subalgebra provides the intrinsic properties of the system, i.e., all properties that could be simultaneously measured during observation. In the hydrogen atom, e.g., intrinsic properties of the electron are energy, absolute angular momentum, (only) one projection of angular momentum, and a projection of the electron spin. Altogether, the eigenvalues of those simultaneously measurable observables deliver the basic quantum numbers of atomic physics and quantum chemistry.
Examples
Consider the unital complex C*-algebra that is generated by three Hermitian observables E, P, Q, with E as the algebra's identity. If the FCR appears as
[Q, E] = 0,     (4)
[P, E] = 0,     (5)
[Q, P] = iE,     (6)
the given algebra is known as the Heisenberg algebra 𝔥 of non-relativistic quantum mechanics. Then, Q is interpreted as the position observable while P receives the interpretation of canonically conjugated momentum. The algebra 𝔥 contains two maximal commutative subalgebras, span(E, Q) and span(E, P) such that either position or momentum can be regarded as the ontologically intrinsic properties, but not both together. This is excluded by the last commutation relation (6), expressing the complementarity of position and momentum in quantum mechanics in contrast to classical mechanics where both position and momentum are intrinsic properties that together span the particle's phase space.
Another important C*-algebra is the real spin Lie algebra 𝔰𝔲(2). It can be generated by three observables X+, X−, X. These observables are introduced in such a way that X is Hermitian, whereas X+ and X− are mutually adjoined to each other: (X+)* = X−. The FCR of 𝔰𝔲(2) is given as
[X, X±] = ±X±,     (7)
[X+, X−] = 2X,     (8)
such that its structure constants are +1, −1, and 2 in the given realization. In 𝔰𝔲(2), there is a Hermitian element C = X+X− + X² − X, called the Casimir operator, that commutes with any other element. For example, one easily sees that
[C, X] = [X+X−, X] = −([X, X+]X− + X+[X, X−]) = −(X+X− − X+X−) = 0,
utilizing the commutation relations (7) and (8) in combination with the commutator's antisymmetry, its linearity, and its product rule (2). Thus, the ontologically intrinsic properties could be suitably chosen as C and X here, because span(C, X) is a maximal abelian subalgebra.
3.1.2. Epistemic description
In quantum theory, epistemic descriptions are necessarily probabilistic. Hence, they require the crucial concept of a state which belongs to a state space that is given as a Hilbert space in the canonical codification of von Neumann [70]. In algebraic quantum theory, by contrast, states are introduced as the positive, normalized linear functionals over a C*-algebra 𝔄 [69, 71]. Therefore, the state space is part of the dual space 𝔄* regarded as the Banach space of linear forms ρ:𝔄 → ℂ, such that ρ(A) = a ∈ ℂ is the statistical expectation value of observable A in state ρ. Formally, one could determine the dual space of the state space as well, that is the bidual 𝔄**. This space also turns out as a C*-algebra in which the original C*-algebra of ontologically intrinsic observables 𝔄 is canonically embedded through Â(ρ) = ρ(A) with  ∈ 𝔄** and A ∈ 𝔄. Hence, the distinction between  and A can be neglected. The bidual 𝔄** is a so-called W*-algebra which is a C*-algebra possessing a Banach space as a predual. Clearly, the bidual 𝔄**, which is the dual space of 𝔄*, has 𝔄* as its predual which is a Banach space as state space [71]. In general, the bidual 𝔄** is much larger than the original algebra 𝔄, containing 𝔄 as a proper subspace. In particular, it may contain many epistemically emergent observables that are absent at the ontic level of description [72].
Besides this rather abstract reasoning, there is a constructive procedure for obtaining an epistemic W*-algebra from its underlying ontic C*-algebra, called the Gel'fand-Naimark-Segal (GNS) construction [69]. In order to achieve this, only one distinguished reference state ρ is sufficient to construct a Hilbert space ℋρ and a representation πρ(𝔄) of the C*-algebra 𝔄 as a W*-algebra of bounded operators over ℋρ. The vectors of ℋρ are conventionally written as Dirac kets |ψ〉 while linear forms of the dual space are written as Dirac bras 〈ψ|, such that an expectation value functional ψ, which is a pure state in the state space, is represented by ψ(A) = 〈ψ|πρ(A)|ψ〉 ∈ ℂ [73].
Examples
The Schrödinger representation of the Heisenberg algebra 𝔥 above is given by Hermitian operators acting on the Hilbert space L²(ℝ) of square-integrable functions over the reals, for a quantum system with one degree of freedom where a particle can freely move along one spatial dimension. This representation is obtained by the operators π(E) = 1, π(Q) = x, and π(P) = −i∂x. The representation π is a C*-algebra homomorphism preserving the FCR
since [π(Q), π(E)]ψ(x) = [x, 1]ψ(x) = 0, [π(P), π(E)]ψ(x) = [−i∂x, 1]ψ(x) = 0, yet
[π(Q), π(P)]ψ(x) = −ix∂xψ(x) + i∂x(xψ(x)) = iψ(x),     (12)
with ψ(x) ∈ L2(ℝ) as the particle's Schrödinger wavefunction.
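The Schrödinger representation can be probed with a crude finite-difference sketch. The grid, the Gaussian test state, and the tolerance below are our own choices, and the discrete commutator only reproduces iψ(x) on interior grid points up to O(h²).

```python
import numpy as np

# Discretize the real line; central differences are accurate to O(h^2).
n, L = 401, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

X = np.diag(x)                       # pi(Q): multiplication by x
D = np.zeros((n, n))
D[np.arange(n - 1), np.arange(1, n)] = 1.0 / (2 * h)   # forward neighbor
D[np.arange(1, n), np.arange(n - 1)] = -1.0 / (2 * h)  # backward neighbor
P = -1j * D                          # pi(P) = -i d/dx (discretized)

psi = np.exp(-x**2)                  # smooth, rapidly decaying test state

lhs = (X @ P - P @ X) @ psi          # [pi(Q), pi(P)] applied to psi
# On interior grid points the discrete commutator approximates i*psi.
err = np.max(np.abs(lhs[1:-1] - 1j * psi[1:-1]))
assert err < 1e-2
```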
However, the image π(𝔥) is not the desired W*-algebra of epistemic observables as the position and momentum operators x and −i∂x are not bounded on the Hilbert space . Therefore, one has to consider them as the infinitesimal generators of the fundamental modulation and translation symmetries of the system by introducing their exponential functions exp(ipx) and exp(−q∂x) contained in the W*-algebra of bounded operators over . Here, p, q are real parameters of the corresponding Weyl transformations [74].
For the spin Lie algebra 𝔰𝔲(2) above, the GNS representation yields the important qubit representation with Hilbert space ℂ² [75]. Introducing the basis vectors
|0⟩ = (1, 0)ᵀ, |1⟩ = (0, 1)ᵀ,
the generators of 𝔰𝔲(2) become represented by linear combinations of the Pauli matrices
σ1 = [[0, 1], [1, 0]], σ2 = [[0, −i], [i, 0]], σ3 = [[1, 0], [0, −1]],
namely π(X) = σ3/2 and π(X±) = (σ1 ± iσ2)/2,
such that the FCR are preserved under the representation:
[π(X), π(X±)] = ±π(X±),     (14)
[π(X+), π(X−)] = 2π(X).     (15)
Looking at (15), e.g., yields
[π(X+), π(X−)] = ¼[σ1 + iσ2, σ1 − iσ2] = −½i[σ1, σ2] = σ3 = 2π(X),
which proves the FCR (15) in the qubit representation.
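The qubit representation can be verified directly in a few lines of numpy. The realization π(X) = σ3/2, π(X±) = (σ1 ± iσ2)/2 and the Casimir normalization C = X+X− + X² − X used below are conventional choices assumed here, not prescribed by the text.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

X  = s3 / 2                  # pi(X)
Xp = (s1 + 1j * s2) / 2      # pi(X+), raising operator
Xm = (s1 - 1j * s2) / 2      # pi(X-) = adjoint of pi(X+)

def comm(A, B):
    return A @ B - B @ A

# FCR (7) and (8): [X, X+/-] = +/- X+/-,  [X+, X-] = 2X
assert np.allclose(comm(X, Xp), Xp)
assert np.allclose(comm(X, Xm), -Xm)
assert np.allclose(comm(Xp, Xm), 2 * X)
assert np.allclose(Xm, Xp.conj().T)

# Casimir operator (our normalization): commutes with all generators and
# equals j(j + 1) * 1 with j = 1/2 in the qubit representation.
C = Xp @ Xm + X @ X - X
assert np.allclose(comm(C, X), 0) and np.allclose(comm(C, Xp), 0)
assert np.allclose(C, 0.75 * np.eye(2))
```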
In the following, we assume a (separable) Hilbert space ℋ with a (countable) complete orthonormal basis {|k〉 ∣ k ∈ ℕ} (where ℕ could be conveniently identified with ℕ0). Then the orthonormality relations
⟨i|k⟩ = δik
with Kronecker delta δik = 0(1) for i ≠ k(i = k) hold. For each basis vector |k〉, one constructs a one-dimensional atomar projector Pk = |k〉〈k| such that
Pk|i⟩ = δki|k⟩,
i.e., |k〉〈k| annihilates all basis vectors orthogonal to |k〉. Completeness of the basis is then expressed by the requirement
∑k Pk = 1,
where 1 denotes the unit operator of the unital W*-algebra . The orthocomplement P⊥ of a projector P is defined as P⊥ = 1 − P.
For an arbitrary epistemic observable A, we may find a normalized eigenstate |a⟩ ∈ ℋ, obeying the eigenvalue equation
A|a⟩ = a|a⟩
with scalar eigenvalue a ∈ ℂ. The eigenvalues of A are the solutions to the characteristic equation
(A − a1)|a⟩ = 0,
i.e., |a〉 ∈ ker(A − a1) where the kernel ker(A − a1) is the null space of the linear operator A − a1, namely the Hilbert subspace that is annihilated by A − a1. This subspace is called the eigenspace of the operator A for eigenvalue a, shortly Eiga(A). The solution set of eigenvalues of an operator is called its spectrum SpecA = {a ∈ ℂ∣A|a〉 = a|a〉}. In a Hilbert space representation, all intrinsic properties of an observable algebra are given as compatible operators that are simultaneously diagonalizable, which means that their eigenvalue equations can be simultaneously solved.
If A is Hermitian, its eigenvalues are real numbers, for
a = ⟨a|A|a⟩ = ⟨a|A*|a⟩ = ⟨a|A|a⟩* = a*;
hence, a = a* with a* as the complex conjugate of a ∈ ℂ, and thus, a ∈ ℝ. Moreover, if A is a projector, its eigenvalues can only be 0 or 1, since
a|a⟩ = A|a⟩ = A²|a⟩ = a²|a⟩;
therefore, a² − a = a(a − 1) = 0.
The eigenstates of a Hermitian operator A provide an orthonormal basis of the Hilbert space such that
ℋ = ⊕a∈SpecA Eiga(A)
as the inner direct sum of eigenspaces. Assigning one projector Pa to each eigenspace Eiga(A) yields the crucial decomposition of any Hermitian observable into its orthogonal projections
A = ∑a∈SpecA a Pa,
according to the famous spectral theorem [70]. If all projectors are atomar, one obtains
A = ∑k ak |k⟩⟨k|.
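The spectral theorem is easily illustrated numerically: diagonalize a random Hermitian matrix, build the atomar projectors from its eigenvectors, and check idempotence, completeness, and the decomposition. Dimension and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2            # a random Hermitian observable

evals, V = np.linalg.eigh(A)        # real eigenvalues, orthonormal eigenvectors

# Atomar projectors P_k = |k><k| built from the eigenvectors.
projectors = [np.outer(V[:, k], V[:, k].conj()) for k in range(4)]

for P in projectors:
    assert np.allclose(P @ P, P)                    # idempotent
    assert np.allclose(P, P.conj().T)               # Hermitian
assert np.allclose(sum(projectors), np.eye(4))      # completeness
# Spectral theorem: A = sum_k a_k P_k
assert np.allclose(sum(a * P for a, P in zip(evals, projectors)), A)
```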
Examples
In the Schrödinger representation of the Heisenberg algebra 𝔥, all Schrödinger wave functions in the Hilbert space L²(ℝ) over one-dimensional configuration space ℝ are acted upon multiplicatively by the position operator, on the one hand, xψ(x) = xψ(x), but they are not simultaneous eigenstates of the momentum operator for any eigenvalue p ∈ ℝ. On the other hand, eigenstates of the momentum operator are the harmonic wave functions ψp(x) = exp(ipx), because
−i∂x exp(ipx) = p exp(ipx)
with eigenvalue p. The fact that a wave function ψ(x) cannot simultaneously be a common eigenstate of both position and momentum operator, reflects the intrinsic complementarity of both observables as reflected by the FCR (6) and (12). As an effect, position and momentum operators appear as Fourier pairs in the Schrödinger wave function representation.
The qubit representation by means of the Pauli matrices above is only the simplest non-trivial W* representation of the spin algebra 𝔰𝔲(2), whose ontologically intrinsic properties are given by the Hermitian Casimir operator C and the operator X. The so-called highest weight module representations of 𝔰𝔲(2) are constructed in the n = 2j + 1-dimensional Hilbert space, where the “highest weight” j (integer or half-integer) is related to the eigenvalue of the Casimir operator.
In addition, m is the eigenvalue of the operator X, such that
X|jm⟩ = m|jm⟩
under the constraint −j ≤ m ≤ j. The eigenstates |jm〉 are indicated by the two simultaneously measurable quantum numbers j, m which are related to the eigenvalues of the observables C, and X, respectively.
In the n-dimensional representation under consideration, the other two operators obey
X±|jm⟩ = √((j ∓ m)(j ± m + 1)) |j m±1⟩.     (27)
Therefore, these are called either ladder or step operators. Interestingly, the states |jj〉 and |j − j〉 are annihilated by the operators X±, respectively:
X+|jj⟩ = 0
and
X−|j −j⟩ = 0,
thereby justifying the notion of “highest weight module representation” [75] where the qubit representation above corresponds to the weight j = 1/2.
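A highest weight module can be constructed explicitly in numpy. The sketch below assumes j = 1 (hence n = 3) and the basis ordering |jj⟩, …, |j −j⟩; both choices are ours.

```python
import numpy as np

j = 1                                    # "highest weight"; dimension n = 2j + 1
ms = np.arange(j, -j - 1, -1)            # m = j, ..., -j (ordering |j j>, ..., |j -j>)
n = len(ms)

X = np.diag(ms).astype(complex)          # X|jm> = m|jm>
Xp = np.zeros((n, n), dtype=complex)     # X+|jm> = sqrt((j-m)(j+m+1)) |j m+1>
for i, m in enumerate(ms[1:], start=1):  # column i holds m, row i-1 holds m+1
    Xp[i - 1, i] = np.sqrt((j - m) * (j + m + 1))
Xm = Xp.conj().T                         # lowering operator

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(X, Xp), Xp)      # [X, X+] = X+
assert np.allclose(comm(Xp, Xm), 2 * X)  # [X+, X-] = 2X

e_top, e_bottom = np.eye(n)[0], np.eye(n)[-1]   # |j j> and |j -j>
assert np.allclose(Xp @ e_top, 0)        # X+ annihilates the highest weight state
assert np.allclose(Xm @ e_bottom, 0)     # X- annihilates the lowest weight state
```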
Finally, we have to introduce the tensor product of two or more Hilbert spaces. A quantum system that possesses either several degrees of freedom or that consists of several subsystems, each described by one Hilbert space ℋk for k ∈ K ⊆ ℕ, is represented by the tensor product of those Hilbert spaces. For K = {1, 2}, we have then
ℋ = ℋ1 ⊗ ℋ2,     (28)
and in general
ℋ = ⊗k∈K ℋk.
The states of this tensor product space are written in the Dirac notation as
|ψ1⟩ ⊗ |ψ2⟩ = |ψ1⟩|ψ2⟩ = |ψ1ψ2⟩,
with |ψ1⟩ ∈ ℋ1 and |ψ2⟩ ∈ ℋ2, where the last convention is the most convenient one. Correspondingly, operators A1, B2 acting in only one of the factor spaces ℋ1 and ℋ2, respectively, are embedded into the product space (28) through
A1 = A ⊗ 12,  B2 = 11 ⊗ B.
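In numpy, the tensor product of matrices is np.kron, and the embeddings of operators acting on different factor spaces can be checked to commute. Dimensions and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))      # operator on factor space H1
B = rng.standard_normal((3, 3))      # operator on factor space H2

A1 = np.kron(A, np.eye(3))           # embedding A1 = A (x) 1
B2 = np.kron(np.eye(2), B)           # embedding B2 = 1 (x) B

# Embedded operators on different factors always commute ...
assert np.allclose(A1 @ B2, B2 @ A1)
# ... and their product is the tensor product of the factors.
assert np.allclose(A1 @ B2, np.kron(A, B))

# Product states: |u>(x)|v> is np.kron(u, v), and A1 acts on the first factor.
u, v = rng.standard_normal(2), rng.standard_normal(3)
assert np.allclose(A1 @ np.kron(u, v), np.kron(A @ u, v))
```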
3.2. Quantum logics
The subset of all projectors of the unital W*-subalgebra 𝔐 of ℬ(ℋ) becomes an orthomodular lattice [57] with respect to the partial ordering P1 ≾ P2 if
P1P2 = P1.
Two projectors are orthogonal, if P1P2 = 0. Clearly, the orthocomplement P⊥ of a projector P is orthogonal to P as
PP⊥ = P(1 − P) = P − P² = 0.
The minimum and maximum of the lattice are given by the projectors 0 and 1, respectively [28]. A maximal set of pairwise orthogonal atomar projectors generates a distributive sublattice of the full projector lattice, where meet and join are given through the algebra product and addition as
P ∧ Q = PQ,     (33)
P ∨ Q = P + Q − PQ     (34)
for all projectors P, Q of that sublattice. For orthogonal projectors, Equation (34) reduces to P ∨ Q = P + Q. With the orthocomplement as negation, such a sublattice becomes a Boolean lattice, and every possible maximal set of pairwise orthogonal atomar projectors defines a Boolean block in the full projector lattice.
For some observable A, we define its domain projector (or shortly dominator) as the orthocomplement of its kernel projector, i.e., as the projector onto the orthogonal complement of ker(A).
Orthomodular lattices form the basics of quantum logics with conjunction (meet), disjunction (join), and negation (orthocomplement) connectives [28, 77].
Example
The projector lattice of the qubit spin algebra 𝔰𝔲(2) is given by the matrices
P = ½(1 + σ1), Q = ½(1 + σ2), R = ½(1 + σ3).
The projectors P, Q, and R indicate spin measurements in positive x-, y-, and z-direction, respectively. The Hasse diagram of the resulting non-distributive but modular lattice structure, shown in Figure 3, is known as the “Chinese lantern” [58]. Note that each P, Q, or R, together with its respective complement, P⊥, Q⊥, or R⊥, defines a distinct Boolean sublattice (comprising 0 and 1 as well), while the entire lattice is not Boolean at all.
Figure 3. Hasse diagram of the 𝔰𝔲(2) projector lattice, known as the non-distributive but modular “Chinese lantern” lattice.
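The failure of distributivity visible in the Chinese lantern can be confirmed numerically. In the sketch below, join and meet are computed as projectors onto the span and the intersection of ranges, respectively; the rank tolerance is our choice.

```python
import numpy as np

# Spin projectors in positive x-, y-, z-direction: (1 + sigma_i)/2.
P = np.array([[1, 1], [1, 1]]) / 2
Q = np.array([[1, -1j], [1j, 1]]) / 2
R = np.array([[1, 0], [0, 0]]).astype(complex)

def join(A, B):
    """Projector onto the span of ran(A) and ran(B)."""
    U, s, _ = np.linalg.svd(np.hstack([A, B]))
    V = U[:, : np.sum(s > 1e-10)]
    return V @ V.conj().T

def meet(A, B):
    """Projector onto ran(A) intersect ran(B), via De Morgan."""
    I = np.eye(A.shape[0])
    return I - join(I - A, I - B)

one = np.eye(2)
# Each pair {P, P_perp} spans a Boolean block ...
assert np.allclose(join(P, one - P), one)
# ... but the whole lattice is not distributive:
lhs = meet(R, join(P, Q))                # R meet (P join Q) = R meet 1 = R
rhs = join(meet(R, P), meet(R, Q))       # (R meet P) join (R meet Q) = 0
assert np.allclose(lhs, R)
assert np.allclose(rhs, 0)
```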
4. Quantum-inspired perception-action cycle
In this section, we propose a novel kind of cognitive dynamical system, called quantum-inspired perception-action cycle (QPAC). The starting point of this innovation is a classical PAC as described in Section 1.1. The PAC comprises actuators to modify the world state of the cognitive agent and sensors to experience the current state of its world. Now, we assign to each actuator of the PAC an operator of an ontic C*-algebra which we call an action operator, or briefly actor. Similarly, each sensor is associated with a Hermitian operator, called perception operator, or shortly perceptor. These operators span together the ontic C*-algebra 𝔄 of the agent such that their fundamental commutation relations provide a complete description of the system's ontology.
The ontic description refers to the intrinsic nature of a system. By contrast, an epistemic description applies to the knowledge that an observer bears about a system under study [72]. In the case of a cognitive system, an agent may be its own observer. In the framework of the QPAC, we are interested in such autoepistemic descriptions [50] that are iteratively inferred through the exploration behavior of the agent. In Section 1.1, we have emphasized the particular role of the simulation loop within the extended PAC. Since simulation always takes place within the subjective mental model that the agent constructs about the objective external world, we adopt the corresponding idea from the quantum cognition approach where cognitive operators are only applied to mental states from a representational Hilbert space.
Here, we first consider Shannon's artificial mouse “Theseus” [5, 78] restricted to a one-dimensional maze of size N. In an epistemic quantum model, each site x (1 ≤ x ≤ N) is associated with a vector |x〉 in an N-dimensional Hilbert space such that all |x〉 form an orthonormal basis: ⟨x|x′⟩ = δxx′. The mouse performs its intrinsic PAC which—in this most simple scenario—is limited to measuring its actual position x (perception) and walking either one place to the right: x ↦ x + 1, or one place to the left: x ↦ x − 1. In the QPAC, we assume the existence of one Hermitian perceptor X, on the one hand, which we refer to as the position observable from the W*-algebra ℬ(ℋ). On the other hand, actions can be characterized as non-Hermitian operators from ℬ(ℋ). In our model, we assume the existence of two mutually adjoined actors X+, X− such that (X+)* = X− and (X−)* = X+, called step operators in the sequel. Note that we do not assume that these operators are necessarily unitary. Their relationship to the step operators of the highest weight module representations of 𝔰𝔲(2) (discussed in Section 3.1) becomes clarified later.
The impact of the operators X, X+, X− upon a Hilbert space state |x〉 is defined as follows:
X|x⟩ = x|x⟩,     (36)
X+|x⟩ = f+(x)|x + 1⟩,     (37)
X−|x⟩ = f−(x)|x − 1⟩.     (38)
Equation (36) is an eigenvalue equation, saying that state |x〉 is an eigenstate of the perceptor X with eigenvalue x, which is the actual measurement result. Equations (37) and (38) define the general action of the step operators where scaling functions f+(x) and f−(x) are introduced for proper normalization.
Next, we compute the commutators (1) of the perceptor X and the actors X±. We first determine the commutator [X, X+]. Applying the operator product XX+ to some state |x〉 yields
XX+|x⟩ = f+(x) X|x + 1⟩ = (x + 1) f+(x)|x + 1⟩.
For the reversed order, we get
X+X|x⟩ = x X+|x⟩ = x f+(x)|x + 1⟩.
Hence, we obtain the difference
(XX+ − X+X)|x⟩ = f+(x)|x + 1⟩ = X+|x⟩,
and therefore
[X, X+] = X+     (39)
as the first commutator.
Similarly, we compute the commutator [X, X−] as follows:
XX−|x⟩ = f−(x) X|x − 1⟩ = (x − 1) f−(x)|x − 1⟩,
and for the reversed order:
X−X|x⟩ = x f−(x)|x − 1⟩,
yielding
(XX− − X−X)|x⟩ = −f−(x)|x − 1⟩ = −X−|x⟩.
Thus, the second commutator is
[X, X−] = −X−.     (40)
Interestingly, Equations (39) and (40) are already those of the spin Lie algebra 𝔰𝔲(2) [75]. Thus, it is tempting to calculate another crucial commutator [X+, X−] next. We carry out this calculation in a slightly different manner, by computing the two expectation values of X+X− and X−X+, respectively. The first one gives
⟨x|X+X−|x⟩ = ‖X−|x⟩‖² = |f−(x)|².
For the second one, we obtain
⟨x|X−X+|x⟩ = ‖X+|x⟩‖² = |f+(x)|².
The expectation value of the commutator is then
⟨x|[X+, X−]|x⟩ = |f−(x)|² − |f+(x)|²,
such that
[X+, X−]|x⟩ = g(x)|x⟩     (41)
with
g(x) = |f−(x)|² − |f+(x)|².     (42)
In order to proceed, we consider the following four cases.
1. g(x) = 0 for all x. This is the case when the squared moduli in (42) are identical. Then, X+ and X− commute and they are also unitary. Thus, going either one step to the right and afterward, one step to the left or vice versa results always in applying the identity operator 1, such that all step operations are reversible. The most simple choice for the scaling functions, f+(x) = f−(x) = 1, yields then
X+|x⟩ = |x + 1⟩, X−|x⟩ = |x − 1⟩,
which requires the maze to be infinite, N = ∞, and without any obstacles. Therefore, we conclude that a commutative subalgebra of step operators {X+, X−} leads to classical (Newtonian) infinite space.
2. g(x) = ax is linear in x. Then, we can write
[X+, X−]|x⟩ = ax|x⟩ = aX|x⟩.
In this case, we obtain the commutator [X+, X−] = aX. In particular, a = 2 renders the spin algebra 𝔰𝔲(2) from Section 3.1.
3. g(x) = a0 + a1x is affine linear in x. Arguing along the same line as above, the commutator becomes a linear combination [X+, X−] = a01 + a1X of the identity operator and the position observable.
4. g(x) can be developed into a power series of x. Now, we may generalize the findings above, by replacing the position measurement x through its observable X, thereby obtaining the operator power series g(X) that converges in the Banach-*-algebra of bounded operators. The general commutator is then [X+, X−] = g(X).
Yet, let us go one step back for the moment in order to finally establish the correspondence to angular momentum algebra. To this end, we have to estimate the normalization functions f±(x). We guess them from an affine transformation that relates the size of the labyrinth N with the quantum number j (the “highest weight”) from (25) through N = 2j + 1. Moreover, the quantum number m is associated with the labyrinth position x through x = m + j + 1. Inserting these quantum numbers into (27) yields the estimates
f+(x) = √(x(N − x)),     (43)
f−(x) = √((x − 1)(N + 1 − x)).     (44)
These guesses entail a first important consequence. Computing f−(1) and f+(N), respectively, yields f−(1) = f+(N) = 0, such that
X−|1⟩ = 0, X+|N⟩ = 0:
the boundaries of the mentally represented labyrinth at x = 1 and x = N are absorbing for the moving mouse whose state becomes annihilated. Note that this does not mean that the agent is physically destroyed when bouncing into a wall (unless it consists of antimatter) as all operations of the QPAC take place at the agent's internal mental stage within the simulation loop from Section 1.1. Thus, an annihilated mental state can be easily restored from a suitable backup copy if necessary.
Finally, we insert (43) and (44) into the commutator
⟨x|[X+, X−]|x⟩ = (x − 1)(N + 1 − x) − x(N − x) = 2x − (N + 1),
such that eventually
[X+, X−] = 2X − (N + 1)1,
which renders the last commutator of the spin algebra 𝔰𝔲(2) up to an affine linear transformation. In fact, the particular size of the maze N turns out as the dimension of the highest weight matrix representation of that algebra.
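This correspondence is easy to check numerically. The sketch below builds the position perceptor and the step operators with the 𝔰𝔲(2) scaling f+(x) = √(x(N − x)) of Equation (43); the maze size is arbitrary.

```python
import numpy as np

N = 7                                     # maze size; positions x = 1, ..., N
x = np.arange(1, N + 1)

fp = np.sqrt(x * (N - x)).astype(float)   # f+(x) = sqrt(x(N - x)), Eq. (43)
X = np.diag(x).astype(float)              # position perceptor
Xp = np.zeros((N, N))
Xp[np.arange(1, N), np.arange(N - 1)] = fp[:-1]   # X+|x> = f+(x)|x+1>
Xm = Xp.T                                 # X- = adjoint of X+

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(X, Xp), Xp)       # Eq. (39)
assert np.allclose(comm(X, Xm), -Xm)      # Eq. (40)
# su(2) commutator up to the affine shift by (N + 1):
assert np.allclose(comm(Xp, Xm), 2 * X - (N + 1) * np.eye(N))

e1, eN = np.eye(N)[0], np.eye(N)[-1]
assert np.allclose(Xm @ e1, 0)            # absorbing left boundary, f-(1) = 0
assert np.allclose(Xp @ eN, 0)            # absorbing right boundary, f+(N) = 0
```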
Finally, we consider the most general case
[X+, X−] = g(X)     (48)
again.
again. The operator power series
may contain several zeros. It is straightforward to interpret those as annihilating walls erected along the spatial dimensions. Because the power series completely depends on its coefficients ak, these encode the structure of the labyrinth algebraically. We may consider them as the maze's structure constants quite in analogy to the theory of Lie algebras in quantum physics.
Let us resume the 𝔰𝔲(2) scaling functions (43) and (44) again. Their squared moduli are given as
|f+(x)|² = Nx − x²,
|f−(x)|² = −x² + (N + 2)x − (N + 1).
Factorization yields respectively
|f+(x)|² = x(N − x)
and
|f−(x)|² = (x − 1)(N + 1 − x).
The zeros of |f+(x)|² are thus x = 0 and x = N, while those of |f−(x)|² are x = 1 and x = N + 1, rendering a finite maze with absorbing boundaries at x = 1 and x = N (for x = 0 and x = N + 1 are beyond the spatial range of the labyrinth).
Now it is straightforward to introduce further absorbing obstacles of the agent's mental model into the factorized scaling functions. Let x1 be the position of another obstacle. Then
f+(x) = √(x(N − x)) (x − x1 + 1)(x − x1),  f−(x) = f+(x − 1)
possess the required zeros:
X+|x1 − 1⟩ = 0, X−|x1 + 1⟩ = 0.
Accordingly, we obtain
f+(x) = √(x(N − x)) ∏xi∈B (x − xi + 1)(x − xi),     (52)
f−(x) = f+(x − 1)     (53)
for a sequence of m annihilating obstacles B = {x1, x2, …xm}.
Inserting (52) and (53) into (42) yields a polynomial in x obeying the general commutation relation (48).
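The effect of such zeros can be illustrated in code. The polynomial factors below are one possible choice with the required zeros, not necessarily the paper's Equations (52) and (53); maze size and obstacle position are arbitrary, and normalization is ignored.

```python
import numpy as np

N, x1 = 9, 5                              # maze size and assumed obstacle position
x = np.arange(1, N + 1).astype(float)

# One choice with zeros at x = x1 - 1 and x = x1 (keeping X- = adjoint of X+):
fp = np.sqrt(x * (N - x)) * (x - x1 + 1) * (x - x1)

Xp = np.zeros((N, N))
Xp[np.arange(1, N), np.arange(N - 1)] = fp[:-1]   # X+|x> = f+(x)|x+1>
Xm = Xp.T                                 # X- = adjoint of X+

basis = np.eye(N)
ket = lambda pos: basis[pos - 1]          # |x> as a unit vector

# Stepping onto the obstacle annihilates the mental state from either side:
assert np.allclose(Xp @ ket(x1 - 1), 0)   # f+(x1 - 1) = 0
assert np.allclose(Xm @ ket(x1 + 1), 0)   # f-(x1 + 1) = f+(x1) = 0
# Ordinary steps away from walls and the obstacle remain possible:
assert np.linalg.norm(Xp @ ket(2)) > 0
```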
The arguments above are straightforwardly generalized toward a maze in two dimensions as follows. Let ℋX be the Hilbert space of x-positions and ℋY be the Hilbert space of y-positions; then all previous operators can be extended to operators acting on the tensor product space ℋ = ℋX ⊗ ℋY. A vector in this product space is indicated as |xy⟩ = |x⟩ ⊗ |y⟩.
First, we obtain two perceptors (Hermitian observables corresponding to perception): X = Xold ⊗ 1 and Y = 1 ⊗ Xold, where Xold refers to the one-dimensional observable defined in (36). Thus, we obtain two eigenvalue equations
X|xy⟩ = x|xy⟩,     (54)
Y|xy⟩ = y|xy⟩,     (55)
i.e., observing X in state |xy〉 yields the measurement result x which is the eigenvalue of eigenstate |xy〉, while observing Y in state |xy〉 gives the measurement result y, the eigenvalue for eigenstate |xy〉.
Correspondingly, we define four non-Hermitian actors (action operators) X+, X−, Y+, Y− as X+ = X+old ⊗ 1, X− = X−old ⊗ 1, Y+ = 1 ⊗ X+old, and Y− = 1 ⊗ X−old, such that
X±|xy⟩ = f±(x, y)|x ± 1, y⟩,
Y±|xy⟩ = h±(x, y)|x, y ± 1⟩.
These step operators mediate transitions to the north: Y+, the east: X+, the south: Y−, and the west: X−. Certainly, the scaling functions f, h become dependent on x and y coordinates now. In the best case, these can be regarded as power series again, whose zeros describe absorbing boundaries or obstacles as above.
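A minimal numpy sketch of this two-dimensional extension follows; the maze extent and the reuse of the 𝔰𝔲(2)-type scaling in both directions are our own choices.

```python
import numpy as np

N, M = 4, 3                               # maze extent in x and y

def one_d(K):
    """1D position observable and su(2)-type step operator of size K."""
    pos = np.arange(1, K + 1).astype(float)
    f = np.sqrt(pos * (K - pos))
    X = np.diag(pos)
    Sp = np.zeros((K, K))
    Sp[np.arange(1, K), np.arange(K - 1)] = f[:-1]
    return X, Sp

X1, S1 = one_d(N)
Y1, T1 = one_d(M)

# Perceptors and actors on the product space H_X (x) H_Y:
X = np.kron(X1, np.eye(M));  Y = np.kron(np.eye(N), Y1)
Xp = np.kron(S1, np.eye(M)); Yp = np.kron(np.eye(N), T1)

def ket(xp, yp):                          # |xy> = |x> (x) |y>
    return np.kron(np.eye(N)[xp - 1], np.eye(M)[yp - 1])

v = ket(2, 1)
assert np.allclose(X @ v, 2 * v)          # Eq. (54): X|xy> = x|xy>
assert np.allclose(Y @ v, 1 * v)          # Eq. (55): Y|xy> = y|xy>
assert np.allclose(X @ Xp, Xp @ X + Xp)   # [X, X+] = X+ survives the embedding
w = Yp @ v                                # step north: |2,1> -> |2,2> (up to scaling)
assert np.allclose(w / np.linalg.norm(w), ket(2, 2))
```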
The Hermitian position operators X and Y with perception Equations (54) and (55) possess a discrete spectrum of eigenvalues. Additionally, we could assume the existence of further Hermitian operators of the QPAC for smell, color, taste, etc., either with discrete or even with continuous spectra as well. All these perceptors together span a maximal abelian subalgebra of the intrinsic properties of the QPAC. The direct product of the spectra of these operators yields the observation space of the agent's sensory capabilities [28]. Partitioning this observation space into disjoint classes is the first problem treated subsequently in Section 5.
Having already solved the classification task, the QPAC can be straightforwardly simplified by the introduction of further eigenvalue equations for observing the presence of cheese, water, bacon, etc.:
C|xy⟩ = c|xy⟩, G|xy⟩ = g|xy⟩, B|xy⟩ = b|xy⟩,
where the eigenvalues become binary variables then: c = 1 indicates the presence of cheese in state |xy〉, whereas c = 0 indicates the absence of cheese in that state. Similarly, for a glass of water G, etc. Therefore, the perceptors C, G, … are projectors, i.e., idempotent observables, C2 = C, etc.
5. Quantum-logic-inspired classification
Exploring the world and constructing semantic structures from sensor-based observations is an essential task of an agent. The agent is typically confronted with the problem of detecting objects of interest based on observations and preexisting data about known objects. This problem is a classification problem where the goal is to find a connection between an input item, i.e., feature data from observations, and a class representing a specific object. In our mouse example, an input item may contain data from sensors for weight, texture, color, and odor, and objects to be detected may be cheese, water, or bacon. In logic-based classification methods, a logical expression establishes that connection, and its evaluation provides a prediction of a class representing the existence of the specific object. A logic-based method, furthermore, allows for an interpretation of the connection. In the sequel, we focus on logic-based classification methods. A prominent method is a decision tree where Boolean logic rules form a tree with leaves expressing the class decisions. Let us assume that an input item corresponds to an element of the observation space [0, 1]ⁿ [28] and every dimension corresponds to a property value from a sensor. Then, every single Boolean condition can be seen as an axis-parallel hyperplane separating input items belonging to the specific object from items not belonging to that object. In cases where the class decision boundary is axis-parallel, a decision tree based on Boolean logic is very successful. However, in other cases, a Boolean-logic-based decision tree typically deteriorates.
In this section, we demonstrate that problem using our mouse example. We want to detect bacon based on a sensor value o ∈ [0, 1] for bacon odor and a sensor value c ∈ [0, 1] for bacon color. Let us further assume that preexisting data lead to a class separation line between bacon and non-bacon following the formula o × c = 0.5; refer to the black line in Figure 4. Notice that this class separation line is not axis-parallel. A decision tree based on Boolean logic tries to approximate the separation line by axis-parallel hyperplanes. The resulting decision tree is very complex even for this simple example; refer to Figure 5.
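As a toy illustration of why axis-parallel splits struggle with the multiplicative boundary, the following sketch compares one crude Boolean rule against the true separation o × c > 0.5 on a grid of sensor values. The particular thresholds (0.7) and the grid resolution are our own illustrative choices, not taken from the study.

```python
# Hypothetical sketch: compare an axis-parallel Boolean rule with the
# true product rule o * c > 0.5 on a grid of sensor values in [0, 1]^2.
def true_class(o, c):
    return 1 if o * c > 0.5 else 0

def boolean_rule(o, c):
    # One crude axis-parallel approximation: "bacon" iff both sensors are high.
    return 1 if (o > 0.7 and c > 0.7) else 0

grid = [(i / 20, j / 20) for i in range(21) for j in range(21)]
errors = sum(1 for (o, c) in grid if boolean_rule(o, c) != true_class(o, c))
print(f"misclassified grid points: {errors} of {len(grid)}")
```

A single pair of axis-parallel cuts always leaves a residue of misclassified points along the curved boundary; a real decision tree must stack many such cuts, which is exactly the complexity visible in Figure 5.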
Figure 4. Class separation line o × c > 0.5 (black) and separation planes of an approximating decision tree (red).
In the following, we develop a two-class classifier inspired by quantum logic. The evaluation of a quantum logic expression describes the map from an input item to a class. All arithmetic formulas below are derived from concepts of quantum mechanics and quantum logic [79, 80].
Feature values can be represented as normalized state vectors. The combination of feature values into tuples corresponds to normalized state vectors in their tensor product. Given a set of compatible projectors, they generate a Boolean sublattice of the orthomodular lattice of all projectors. Now associate a logical expression to each projector from the given set. Then each projector in the Boolean sublattice also corresponds to a logical expression. Before a logic expression can be evaluated on an input tuple, the expression must be normalized, i.e., transformed into a specific normal form by exploiting the laws of Boolean algebra. A logic expression in normal form can then be transformed into an arithmetic expression for evaluation.
For finding a good classifier, we derive a logic expression. Here, we use the fact that every logic expression can be transformed into its disjunctive normal form (DNF), i.e., into a disjunction of minterms. In our approach, we equip minterms with weights [81]. Thus, finding a classifier reduces to finding the best weights for all minterms for some training data.
We consider the classification task in the following form. Given a training set {(xi, yi) ∣ i ∈ I} with xi ∈ [0, 1]^n and yi ∈ {0, 1} for each index i ∈ I, where I denotes an appropriate index set, the task is to construct a predictor mapping p: [0, 1]^n → {0, 1}
with two potentially conflicting properties:
1. It should generalize from the training set appropriately and avoid over-fitting.
2. Restricted to the training data, it should reproduce the values p(xi) = yi for as many indices i ∈ I as possible.
For achieving the first property, we take motivation from classical logic and refine it with quantum logic.
The disjunctive normal form (DNF) of a logical expression on n atomic propositions a1, …, an is, by definition, the disjunction of a selection of minterms of the form
l1 ∧ ⋯ ∧ ln with lk ∈ {ak, ¬ak} for k = 1, …, n.    (53)
Hence, the DNF of a logical expression is the disjunction of at most 2^n terms of type (53).
Each minterm can be represented by an n-digit binary expression d1 ⋯ dn, where dk = 1 if lk = ak and dk = 0 if lk = ¬ak.
Using the binary expressions as binary codes for numbers j, we associate to each minterm an index j ∈ {0, …, 2^n − 1} via j = ∑k dk 2^(n−k).
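The indexing just described can be sketched in a few lines; reading d1 as the most significant bit is our assumption about the digit order.

```python
# Sketch of the minterm indexing: the binary code d1...dn of a minterm
# (d_k = 1 if the k-th literal is a_k, 0 if it is the negation) is read
# as the binary representation of an index j in {0, ..., 2^n - 1}.
# Treating d1 as the most significant bit is an assumption here.
def minterm_index(digits):
    j = 0
    for d in digits:
        j = (j << 1) | d
    return j

def minterm_digits(j, n):
    # Inverse mapping: recover the binary code d1...dn from the index j.
    return [(j >> (n - 1 - k)) & 1 for k in range(n)]

print(minterm_index([1, 0, 1]))   # binary 101 -> 5
print(minterm_digits(5, 3))       # -> [1, 0, 1]
```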
The transition to quantum logic is best illustrated by the evaluation of logical expressions. We denote the evaluation of a proposition a by ⟦a⟧. Classical two-valued logic gives ⟦a⟧ = 1 if a is true and ⟦a⟧ = 0 if a is false. When using quantum logic, the evaluation of a logical proposition is a real number in [0, 1]. Here, we use an evaluation of weighted logical expressions based on CQQL (commuting quantum query language). The method is described in [79–81]—what we need here are the following rules:
1. To each logical atom a we associate an evaluation ⟦a⟧ ∈ [0, 1].
2. The negation of a logical expression a evaluates to ⟦¬a⟧ = 1 − ⟦a⟧.
3. A minterm evaluates to the product of the evaluations of its literals: ⟦l1 ∧ ⋯ ∧ ln⟧ = ∏k ⟦lk⟧.
4. A disjunction of minterms evaluates to the sum of the minterm evaluations.
Given an input item x = (x1, …, xn) ∈ [0, 1]^n, assume that ak(x) is a logical expression which evaluates to xk, i.e., ⟦ak(x)⟧ = xk. Set mj(x) to be the evaluation of the j-th minterm on x. Using the above rules leads to
mj(x) = ∏k ⟦lk⟧, where ⟦lk⟧ = xk if dk = 1 and ⟦lk⟧ = 1 − xk if dk = 0,
with d1 ⋯ dn being the binary code of j. Note that ∑j mj(x) = 1, with j running from 0 to 2^n − 1.
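The evaluation rules above can be sketched directly; the feature tuple is an arbitrary example, and the code confirms that the 2^n minterm evaluations always sum to 1.

```python
from itertools import product

# Sketch of the CQQL minterm evaluation: a minterm with binary code
# (d1, ..., dn) evaluates on x in [0,1]^n to the product of x_k (if d_k = 1)
# or 1 - x_k (if d_k = 0). The minterm evaluations always sum to 1.
def minterm_eval(x, digits):
    value = 1.0
    for xk, dk in zip(x, digits):
        value *= xk if dk == 1 else 1.0 - xk
    return value

x = (0.8, 0.3, 0.6)  # example feature tuple (assumed values)
evals = [minterm_eval(x, d) for d in product((0, 1), repeat=len(x))]
print(round(sum(evals), 10))  # the 2^n evaluations sum to 1
```

The sum is 1 because it factors as the product of (xk + (1 − xk)) over all k.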
Now define a threshold function τρ, depending on a parameter ρ ∈ [0, 1], as follows: τρ(s) = 1 if s ≥ ρ and τρ(s) = 0 otherwise.
Our predictor map uses, in addition, parameters λ0, …, λ2^n−1 ∈ [0, 1], which represent minterm weights. The formula is: p(x) = τρ(∑j λj mj(x)).
The question is how well the prediction reproduces the training data, i.e., whether pi := p(xi) = yi or pi ≠ yi, for i ∈ I. The total accuracy is, by definition, the total number of correct classifications. Hence, our task is to determine a parameter set (ρ, λ0, …, λ2^n−1) such that the total accuracy on the training data,
∑i∈I (yi pi + (1 − yi)(1 − pi)),
is maximal. As the terms (1 − yi) do not depend on ρ and the λj, our task is equivalent to maximizing ∑i∈I (2yi − 1) pi. In order to get a more handsome formula, we use the notation I0 := {i ∈ I : yi = 0} and I1 := {i ∈ I : yi = 1}. This allows us to write I as a disjoint union I = I0 ∪ I1. Now our task is to maximize
∑i∈I1 pi − ∑i∈I0 pi.
This means we are looking for the parameters (ρ, λ0, …, λ2^n−1) that maximize this difference.
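A minimal sketch of the weighted predictor and its training accuracy follows; the toy data, the weights λj, and the threshold ρ are illustrative values we chose by hand, not fitted ones.

```python
from itertools import product

# Hedged sketch of the weighted predictor: score(x) = sum_j lambda_j * m_j(x),
# thresholded at rho. The weights, rho, and the toy data are illustrative only.
def minterm_eval(x, digits):
    v = 1.0
    for xk, dk in zip(x, digits):
        v *= xk if dk == 1 else 1.0 - xk
    return v

def predict(x, weights, rho):
    codes = list(product((0, 1), repeat=len(x)))
    score = sum(w * minterm_eval(x, d) for w, d in zip(weights, codes))
    return 1 if score >= rho else 0

# Toy training data for the bacon example: (odor, color) -> label.
data = [((0.9, 0.8), 1), ((0.9, 0.9), 1), ((0.2, 0.3), 0), ((0.1, 0.9), 0)]
weights = [0.0, 0.1, 0.1, 1.0]   # lambda_j for minterm indices 0..3 (assumed)
rho = 0.5
accuracy = sum(1 for x, y in data if predict(x, weights, rho) == y)
print(f"correct: {accuracy}/{len(data)}")
```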
Next, we investigate the impact of a fixed minterm weight λj0. For simplicity, we denote the evaluation of the j-th minterm on a training point xi by mi,j := mj(xi).
Moreover, given weights λ0, …, λ2^n−1 and an index j0, we set
si := ∑j≠j0 λj mi,j.
In order to see what happens when λj0 runs from 0 to 1, we say that the line
λ ↦ si + λ mi,j0
belongs to (i, j0). Some example lines with index j0 are depicted in Figure 6. Lines belonging to indices i ∈ I1 are drawn in black, and lines belonging to i ∈ I0 are red. The interpretation is as follows: if, at some weight λ, a black line is below a red line, and if it is possible to choose a new weight λ′ such that the two lines intersect between λ and λ′, then accuracy will be improved if λ is replaced by λ′. This is the idea behind our algorithms.
The next step is to derive formulae for use in our algorithms. To this end, assume i ∈ I1 and ℓ ∈ I0. If mi,j0 ≠ mℓ,j0, then the two lines belonging to (i, j0) and (ℓ, j0) intersect at the coordinate
λ* = (sℓ − si) / (mi,j0 − mℓ,j0),
which is relevant whenever it lies at an interior point of the unit interval.
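The swap-point computation can be sketched directly from the two line equations; the helper name and the concrete numbers below are our own illustration.

```python
# Sketch of the line-intersection step: the lines belonging to (i, j0) and
# (l, j0) are lambda -> s_i + lambda * m_i and lambda -> s_l + lambda * m_l.
# Setting them equal gives lambda* = (s_l - s_i) / (m_i - m_l), which is only
# useful as a swap point when the slopes differ and lambda* is interior.
def intersection(s_i, m_i, s_l, m_l):
    if m_i == m_l:
        return None  # parallel lines: no rank swap possible
    lam = (s_l - s_i) / (m_i - m_l)
    return lam if 0 < lam < 1 else None  # only interior intersections matter

print(intersection(0.0, 0.5, 0.25, 0.0))  # -> 0.5
```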
The basic idea of our algorithms is to choose minterm weights that produce a ranking where training data items of the index set I1 are ranked higher than items of I0. We suggest an approximation approach: starting from random weights, we modify the weights step by step as long as the changes improve the ranking; refer to Algorithm 1. Every single weight is updated in Algorithm 2 if a new value yields an improvement. The algorithm counts how many I1 training data items change their rank position with I0 items and then decides whether the new weight is really an improvement.
In order to find good candidates for a new minterm weight to be checked, we search in Algorithm 3 for a new weight. That is, we identify minterm weight candidates which cause an item swap. Choosing a new minterm weight from the candidates at random is given in Algorithm 4. Alternatively, one could select the weight candidate which is nearest to the current weight.
Algorithm 3. Generates a new minterm weight next to λj by checking changes of the order if λj were 0 (left side) or 1 (right side). Epsilon should be small enough; an alternative approach would be to choose the middle point between the next and the second change of the order.
A new weight candidate can be smaller (testing to the left) or bigger (testing to the right) than the current minterm weight. In order to find it, we check swaps on the λ = 0 and λ = 1 lines and derive the weight value where the swapped items share the same score.
Our algorithm is a greedy approach very similar to hill climbing. Thus, there is always the risk of getting stuck in a local optimum and missing the global optimum. Therefore, in Algorithm 5, we start with random weights, repeat the whole approximation process several times, and then take the best solution. We measure the quality of a solution by re-sorting the produced ranking (originally ordered by score values) by the y-values: we simply count the number of swaps required by the bubble sort algorithm.
After finding a weight for every minterm, the final threshold ρ needs to be determined. A simple solution is to try the final score of every training input item as a threshold value and select the one providing the best accuracy.
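The restart-and-improve scheme of Algorithms 1–5 can be sketched as follows, with one deliberate simplification: weight candidates are drawn from a fixed grid instead of the swap-point analysis of Algorithms 3 and 4. The toy data, the restart count, and the candidate grid are our own illustrative choices.

```python
import random
from itertools import product

# Simplified sketch of the greedy weight search: random restarts, coordinate-wise
# weight updates accepted only if they reduce the number of ranking inversions
# (I0 items scored at or above I1 items), then threshold selection by trying
# every training score as rho.
def minterm_eval(x, digits):
    v = 1.0
    for xk, dk in zip(x, digits):
        v *= xk if dk == 1 else 1.0 - xk
    return v

def scores(data, weights, codes):
    return [sum(w * minterm_eval(x, d) for w, d in zip(weights, codes))
            for x, _ in data]

def inversions(data, weights, codes):
    s = scores(data, weights, codes)
    return sum(1 for i, (_, yi) in enumerate(data)
                 for j, (_, yj) in enumerate(data)
                 if yi == 1 and yj == 0 and s[i] <= s[j])

def fit(data, n, restarts=5, seed=0):
    rng = random.Random(seed)
    codes = list(product((0, 1), repeat=n))
    best = None
    for _ in range(restarts):
        w = [rng.random() for _ in codes]
        improved = True
        while improved:
            improved = False
            for j in range(len(w)):
                for cand in (0.0, 0.25, 0.5, 0.75, 1.0):  # simplified candidates
                    trial = w[:j] + [cand] + w[j + 1:]
                    if inversions(data, trial, codes) < inversions(data, w, codes):
                        w, improved = trial, True
        if best is None or inversions(data, w, codes) < inversions(data, best, codes):
            best = w
    # Threshold selection: every training score is a candidate for rho.
    s = scores(data, best, codes)
    rho = max(s, key=lambda r: sum(1 for si, (_, y) in zip(s, data)
                                   if (si >= r) == (y == 1)))
    return best, rho

data = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.2, 0.3), 0), ((0.1, 0.9), 0)]
weights, rho = fit(data, n=2)
print("remaining inversions:",
      inversions(data, weights, list(product((0, 1), repeat=2))))
```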
Finally, since the minterms may have different weight values, they cannot be combined into simple logical expressions for good human understanding. A solution to that problem is to allow only discrete weight values from the unit interval [0, 1] as a result of Algorithm 4. All minterms with equal discrete weight values can then be combined, simplified by Boolean algebra rules, and interpreted. For example, minterms of the simplified disjunctive normal form can be seen as sufficient conditions for the class decision, whereas maxterms of the simplified conjunctive normal form can be seen as necessary conditions. Apart from these aspects of human understanding, a technical agent can directly evaluate minterms with different weights for predictions.
6. Quantum-inspired ontology inference
The intent of ontology inference is the computation of semantic structures from observation data gathered by a cognitive agent. These structures should serve the agent in reaching its goals by organizing its world, as well as in communicating with other agents, by determining the meaningful objects in its world and, therefore, the available terms. Because the goal of that inference is the veiled ontology, we have to develop an algorithm for autoepistemic ontology inference in the sequel.
We start, as in earlier studies [82, 83], with the distinction between controllables and non-controllables but now use these terms for perceptors. Perceptors are used to represent the observations of the agent. Controllables are those for which the agent both has sensors and commands actuators, whereas non-controllables are those for which only sensors are available. Additionally, as in [82, 83], we are searching for functional dependencies of non-controllables on controllables, which arise from the fact that a vector space representing a controllable can be split into an inner orthogonal sum of subspaces representing parts of a non-controllable. If this sum can be expressed in terms of given basis vectors—which represent observation values of the controllable—then there is obviously a mapping from those values to the parts of the non-controllable—which represent observation values of the non-controllable in turn. Note that a perceptor can also represent the result of a classification, as suggested at the end of Section 4.
Since we are using Hermitian observables as perceptors it follows from the spectral theorem (23) that there is always an orthogonal basis of eigenvectors. By encoding the observation values as eigenvalues of perceptors, we always get an inner orthogonal sum of eigenspaces for such a perceptor. By identifying controllables with Hilbert spaces and representing non-controllables within the tensor product space of all those spaces we can connect their observation values to each other.
In [82, 83], we checked dependencies after the agent made a full exploration of its world and computed the semantic structures afterward. Now, we show how semantic structures can be constructed during exploration for an agent to constantly structure its world. At the same time, we are able to say more about the mathematical nature of semantic structures through the reformulation and extension of our ontology inference algorithm.
We first concentrate on the perceptors since we are only interested in the observations the agent makes and what they reveal about the structure of its experienced world. An extended version of the algorithm would compute the actors in much the same way and gather even more structural information. To embed ontology inference into algebraic quantum theory and establish the necessary tools there is some preparatory work to do.
Let ℋ be a (complex) Hilbert space and B an orthonormal basis of ℋ. For any element b ∈ B, the term |b〉〈b| yields a projector, i.e., a Hermitian idempotent observable. The real vector space of Hermitian linear operators generated from these projectors is a (real) Banach space when equipped with the operator norm, a (real) Banach algebra when additionally equipped with composition as multiplication, and eventually a (real) Banach-*-algebra when also equipped with adjunction as involution. It fulfills the C*-property and is a (real) C*-algebra. According to [84], such algebras are called B*-algebras but are also simply referred to as real C*-algebras in the literature [cf. 76]. However, it is shown in [76] that the complexification of a real C*-algebra is in fact a (complex) C*-algebra. Moreover, denoting for a real C*-algebra 𝔄 its complexification by 𝔄ℂ, the natural embedding of 𝔄 into 𝔄ℂ is a homeomorphic *-isomorphism [84]. In this sense, the algebra generated from an orthonormal basis is a proper W*-subalgebra of the W*-algebra of all bounded operators on the Hilbert space ℋ. Moreover, for a given Hilbert space ℋ, an arbitrary orthonormal basis generates a commutative unital (real) W*-subalgebra, and running over all orthonormal bases of ℋ yields the real W*-subalgebra comprising all projectors. Interestingly, the projector lattice in the real case coincides with that of the complex case, and the projector lattices of the basis-generated subalgebras are exactly the Boolean blocks of the orthomodular projector lattice.
During exploration, controllables and non-controllables are represented by elements of such a basis-generated algebra for a suitable Hilbert space ℋ and a suitable orthonormal basis B. Whenever a new observation is made, the space is expanded if necessary by adding new vectors to B while preserving orthonormality. All perceptors are embedded into the then possibly bigger space and updated by adding a new component representing the suitable part of the observation by means of an additional eigenvalue equation. This gives rise to the first of three views on the set of perceptors, given by different partial orders and lattice structures.
Let A and B be Hermitian linear operators; then, by utilizing the dominator from (35), we set A ⊴ B iff DADB = DA and ADA = BDA. Whenever A ⊴ B holds, it follows that (SpecA\{0}) ⊆ (SpecB\{0}), which means that A is a possible predecessor of B during exploration. The (reflexive) partial order ⊴ defines the history view on the set of perceptors, where every maximal element—any O with DO = 1—represents a possible perceptor of the agent at the current exploration step, and every maximal chain represents a possible history for the perceptor at its top, whereas different chains result from different sequences of the made observations. As we already saw in Section 4, it is possible for different observables to result in the same perceptor. This indistinguishability is reflected in the history view, where only explorative contiguity is represented while the origin in terms of sensors is abstracted away.
Let (X, ≤) be a partial order. For any x ∈ X we denote by ≤ ↓(x) = {y ∈ X ∣ y ≤ x} the principal downset of x where we also omit the relation symbol if it is clear from the context. For a complete lattice (L, ∨, ∧) an element a ∈ L is called totally below an element b ∈ L denoted a ≪ b iff for any subset X ⊆ L with ∨X ≥ b there exists an element x ∈ X with a ≤ x (cf. [85]). Since is a vector space every element can be represented as a linear combination of elements from a basis. Moreover, setting , for any element the set contains exactly those basis vectors from the linear combination of B over and the elements from are exactly the minimal elements from with respect to ⊴. These properties are carried over to the Dedekind-MacNeille completion of which is a complete lattice where every element is the join of the elements totally below it. It follows from Theorem 1 from [86] that the Dedekind-MacNeille completion of is a completely distributive complete lattice. Following Theorem 2.9 from [85], a complete meet sublattice of a completely distributive complete lattice is a partial sup lattice, where the join is given as a partial operation ⊔ that agrees with the supremum whenever it is defined. In this sense, is a partial sup lattice where the complete meet-part means that every perceptor is composed of elements of and every two perceptors can have common histories (at least 0) whereas the partial sup-part means that if two perceptors evolve in different directions during exploration they will never unite again and if a perceptor already contains a certain component this cannot be overwritten by a different eigenvalue.
Partial sup lattices have another nice interpretation in our case. Since we want elements in semantic structures to represent objects in the agent's constructed world, the complete meet-part could mean that every object is composed of individual sensor values, whereas the partial sup-part could mean that not every composition yields a meaningful object. Then, the aim of the ontology inference algorithm could be the computation of several partial sup lattices as complete meet sublattices of a suitable view. Those lattices should represent dependencies of non-controllables on controllables on the one hand and construct the meaningful objects in the world of the cognitive agent on the other hand. Additionally, they should allow the deduction of logical connections. Note that this is not to be confused with the compositionality (or non-compositionality) of (linguistic) semantics. We let the agent construct objects as meaningful combinations of sensor values, and such combinations are only meaningful if they serve the agent in reaching one of its goals.
We define an equivalence relation ~ utilizing the dominator from Equation (35). Let A and B be Hermitian linear operators; then A ~ B iff DA = DB. With the operations lifted to the equivalence classes, the quotient is a commutative unital (real) W*-algebra12 that is isomorphic to its own projector lattice (cf. Section 3.2) and can be seen as a W*-subalgebra. The partial order ≾ given through Equation (32) is also a partial order here when defined with the help of the dominator and can be lifted to the equivalence classes. Equipped with the unary operation defined by ¬[A] = [1 − DA], the resulting 6-tuple is a Boolean algebra and defines the logical view. This view abstracts not only from origin, like the history view, but also from the concrete encoding of observation values.
The third view, the semantic view, is best introduced with the help of an example. We start with the situation depicted in Figure 7, where the cognitive agent Theseus sits on the field with x-coordinate 1 and y-coordinate 1. The world has an edge length of 3 and contains some glasses of water and a piece of cheese13. Theseus is equipped with the four sensors X, Y, C, and G, yielding values for the x- and y-coordinates and for the presence/absence of cheese and water, respectively. These sensors supply values from the sets of sensor values VX = {1, 2, 3}, VY = {1, 2, 3}, VC = {0, 1}, and VG = {0, 1}, and Theseus uses functions to transform sensor values from these sets into observation values. For a set of sensor values VS, a real-valued function kS:VS → ℝ is called an encoding of VS iff kS is injective and kS(VS) ⊆ ℝ \ {0}. The second condition is due to using the special element 0 to represent the absence of knowing the concrete value [see Equation (65)]. This use of encodings justifies the above definition of the logical view, since it abstracts from the concrete encodings used. Let {X, Y} be the set of controllables and let {C, G} be the set of non-controllables. Then we associate to every controllable a Hilbert space and get the world space by the tensor product over these spaces14. An individual Hilbert space is constructed in such a way that for every sensor value v ∈ VS an observation vector |kS(v)〉 exists and the set of these vectors builds an orthonormal basis. Consequently, the tensor products of basis vectors form the basis of the world space. When Theseus makes an observation, every sensor answers with a value from its corresponding value set. An observation is, therefore, a tuple m15, where we denote by mS the already encoded observation value for the sensor value of S in the observation tuple m, and we denote by |mX mY〉 the controllable observation vector. We also associate to every sensor S a perceptor OS as a Hermitian linear operator over the world space such that for every previously made observation m the eigenvalue equation OS|mX mY〉 = mS|mX mY〉 holds.
Before Theseus makes its first observation, we initialize the Hilbert spaces for the controllables by the null space and the corresponding bases consequently by the empty set. Every perceptor OS is also initialized by OS = 0. The world space is a lonely point of dimension zero.
Theseus' first observation according to Figure 7 is m1 = (1, 1, 0, 0) ∈ VX × VY × VC × VG. Then we extend the Hilbert spaces for the controllables by extending their bases through
obeying the orthonormality of the bases, and update all perceptors OS by
where we first have to embed all perceptors into the new world space if it is now of higher dimension. Clearly, the required eigenvalue equation holds if we denote the updated perceptor by the right-hand side of (65). In our example, the world space is now a one-dimensional line and the logical view a Boolean algebra consisting of 0 and 1 with 0 ≠ 1.
Now, Theseus moves one step in the x-direction and makes its second observation m2 = (2, 1, 0, 0). After extending the bases, the world space is a two-dimensional plane and the logical view a Boolean algebra with 0, 1, and two additional elements, as depicted on the left of Figure 8. After updating all perceptors, the relations 1 ~ X ~ Y ~ C ~ G hold, where we now use the identifiers of the sensors also for their associated perceptors, and the represented knowledge of the world seems rather boring. For a controllable and one of its basis vectors, we introduce the so-called combinatorial basis projector, which represents the subspace corresponding to |kS(v)〉 and is defined by
We define the basis projector PS, v as the projector which represents the already observed subspace corresponding to |kS(v)〉. Note that, for a controllable S, the sum over all sensor values of S corresponds to the notion of “focus operators” from [82], and the role of DOS in the relation between the combinatorial focus operator and its already observed counterpart generalizes the notion of “veridicality projector” from [82].
Combining this new notation with the equivalence relation ~ and explicitly adding some equivalent elements, we can represent the situation in the individual spaces and their embeddings in the world space as in the middle and on the right of Figure 8, respectively, where equivalent elements are connected by snake lines16.
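A dictionary-based stand-in can mimic this bookkeeping: a perceptor is modelled as a map from observed controllable tuples (standing in for the world-space basis vectors |xy〉) to encoded eigenvalues, and its support plays the role of the dominator. The encoding function and the data structures are our illustrative simplifications, not the operator formalism itself.

```python
# Illustrative stand-in for the perceptor update during exploration:
# a perceptor maps observed controllable tuples (x, y) to encoded
# eigenvalues; its key set plays the role of the dominator D_O.
# Encodings shift sensor values away from 0, since 0 marks "value unknown".
def encode(v):
    return v + 1  # injective and never 0 (an assumed encoding)

perceptors = {"X": {}, "Y": {}, "C": {}, "G": {}}

def observe(x, y, c, g):
    key = (x, y)  # the controllables span the world space
    for name, value in (("X", x), ("Y", y), ("C", c), ("G", g)):
        perceptors[name].setdefault(key, encode(value))

def dominator(name):
    return set(perceptors[name])  # the observed subspace / support

for m in [(1, 1, 0, 0), (2, 1, 0, 0)]:  # observations m1 and m2
    observe(*m)

# After m2 all perceptors share the same dominator, i.e., they are
# equivalent in the logical view: 1 ~ X ~ Y ~ C ~ G.
print(dominator("X") == dominator("Y") == dominator("C") == dominator("G"))  # -> True
```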
Theseus moves another step in the x-direction and makes the observation m3 = (3, 1, 0, 1). Updating the bases makes the world space a three-dimensional space and results in the logical view on the left of Figure 9. Updating the perceptors and using the new notation OS|V, for a perceptor OS and a subset of sensor values V ⊆ VS, defined by
where we also omit the curly brackets for singleton sets V = {v} and simply write OS|v, we end up with the structures in the middle and on the right of Figure 9.
The structures depicted in Figure 9 show that the absence of cheese can be predicted from knowing either the x-coordinate or the y-coordinate, but the presence/absence of water can only be predicted from knowing the x-coordinate. Both structures can be seen as complete meet sublattices (e.g., [85]) where certain equivalence classes are listed multiple times by different representatives and not all joins are present (partial sup lattices, e.g., [85]). Because G|1 ≺ PY, 1 and G|0 ≺ PY, 1 both hold, G|1 and G|0 cannot be expressed in terms of basis projectors of the y-space anymore. But they can still be expressed in terms of basis projectors of the x-space. In such situations, we omit the non-controllables in question from the structure for the controllable where they are no longer expressible.
After a step in the y-direction, Theseus makes the observation m4 = (3, 2, 0, 1), and we represent the structures for both controllables combined in one structure, as on the left of Figure 10, where 0, 1, and the atomic projectors are shared between the parts. The atomic projectors |22〉〈22| and |21〉〈21| correspond to the yet unobserved coordinates (2, 2) and (2, 1), which are only assumed to exist for combinatorial reasons, and are left out for space reasons. After another step in the y-direction and the observation m5 = (3, 3, 1, 1), we arrive at the structure on the right of Figure 10, where even more atomic projectors are not depicted. Now, we have a situation where every controllable can exactly predict one non-controllable.
A step in the x-direction brings Theseus to the coordinates (2, 3), where it makes the observation m6 = (2, 3, 0, 0), which leads to the structure in Figure 11. Now, we have a different situation than before, since C|1 depends on knowing both the x- and the y-coordinate and cannot be predicted from one coordinate alone. This fact is reflected by the edges from C|1 to PX, 3 and PY, 3, showing that C|1 is only a part of each of them. On the other hand, C|0 consists of parts from both sides of the structure but cannot be expressed by elements from only one side. This is best seen when the structure from Figure 11 is divided into the three parts depicted in Figure 12, where the atomic projectors are hidden behind three dots for space reasons.
Figure 11. The semantic structure after m6 where the prediction of C|1 needs PX, 3 and PY, 3 and C|0 can be predicted on parts of X, respectively, Y.
Figure 12. The three parts (A–C) of the semantic structure from Figure 11 after m6.
On the left of Figure 12, one can see that the absence/presence of water separates the space of x-coordinates. The column with the number 3 is equivalent to the presence of water, which constructs “column 3” as a meaningful object, while the sum of columns 1 and 2—corresponding to logical OR—is equivalent to the absence of water. The structure in the middle of Figure 12 contains no non-controllables, which makes the space of y-coordinates useless on its own. The conditions for the absence/presence of cheese are shown on the right of Figure 12. Presence of cheese is equivalent to the product of column 3 and row 3—corresponding to logical AND—while the absence of cheese is equivalent to the sum of columns 1 and 2 and rows 1 and 2—corresponding to logical OR again. Thus, the “field (3, 3)” is constructed as a meaningful object. Although Theseus has not explored the whole labyrinth, the structure represents the situation very well. A complete exploration would only add more edges from basis projectors to atomic projectors but would not change the structure above them. In this way, semantic structures allow for predictions on not yet explored parts of the world by simply exchanging basis projectors with their combinatorial counterparts and propagating the changes to the perceptors.
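These logical readings can be checked in a toy model that identifies every projector with the set of maze cells it spans, so that a sum of orthogonal projectors becomes set union (OR) and a product of commuting projectors becomes intersection (AND). The set encoding is our illustrative simplification of the projector calculus.

```python
# Toy model of the semantic structure: each projector is the set of maze
# cells (x, y) it spans; sum of orthogonal projectors = union (logical OR),
# product of commuting projectors = intersection (logical AND).
cells = {(x, y) for x in (1, 2, 3) for y in (1, 2, 3)}
col = {x: {(x, y) for y in (1, 2, 3)} for x in (1, 2, 3)}  # basis projectors P_{X,x}
row = {y: {(x, y) for x in (1, 2, 3)} for y in (1, 2, 3)}  # basis projectors P_{Y,y}

water_present = col[3]                        # G|1: water iff column 3
water_absent = col[1] | col[2]                # G|0: OR of columns 1 and 2
cheese_present = col[3] & row[3]              # C|1: AND of column 3 and row 3
cheese_absent = col[1] | col[2] | row[1] | row[2]  # C|0: OR of the rest

print(cheese_present == {(3, 3)})             # "field (3, 3)" as meaningful object
print(water_present | water_absent == cells)  # G|1 and G|0 partition the world
```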
After this informal introduction of semantic structures, we now define them with the help of labeled partial orders. Formally, a labeled partial order (lpo) over a set X of labels is a partial order ≤ on a set V of vertices together with a labeling function l:V → X, written as a 3-tuple (V, ≤, l). Since we sometimes want equal observables to appear in different places within our semantic structures—because they have different origins and meanings, are therefore denoted by different symbols, and should possibly be connected to different elements—labeled partial orders are an obvious choice here. To get distinguishable vertices, we augment every observable with its origin and context.
Let be a non-empty family of controllables for some c ∈ ℕ and be a non-empty family of non-controllables for some n ∈ ℕ, both after some exploration steps. Let be the associated Hilbert space for a controllable with its basis . Then the world space is . Let be the set of atomar projectors from for the basis of . For a controllable , we denote the set of its combinatorial basis projectors by , the set of its basis projectors by and for a non-empty family of controllables we denote its associated Hilbert space by . For a non-controllable , we denote the set of its components by PN = {N|V ∣ kN(V) ⊆ SpecN} = ⊴↓(N)\ {0}.
We use the members from and and the identifier as formal terms to denote origin as elements from and we use the formal sums for any non-empty family of controllables and the identifier to denote context as elements from . We will have a certain subset of to be the support of the semantic view on :
• ,
• ,
• and
• .
For any element , we denote by S𝔅 the observable in its first component, by S𝔒 the origin from its second component, and by Sℭ the context from its third component. For any origin o and any context c, we write o ∈ c iff the formal terms are equal (o = c) or c is the formal sum for a non-empty family of controllables and o is the formal term of a member of D. Next, we define the (reflexive) partial order ≼ on with the help of ≾ on . For any two elements , we set S ≼ T iff S𝔅 ≾ T𝔅 and one of the following holds:
• S = T (for reflexivity).
• S𝔒 ∈ Tℭ (for the standard ordering on ).
• and (for transitivity).
• T𝔒 ∈ Sℭ and S𝔅 ≁ T𝔅 (cf. C|1 in Figure 11).
• and S𝔅 ≁ T𝔅 (cf. C|0 and G|0 in Figure 11).
• and S𝔅 ~ T𝔅 and T ⋠ S (cf. C and G in Figure 8).
Following the same argumentation as above, the (reflexive) partial order ≼ is also a partial sup lattice.
We define the labeling l on by explicitly listing the results for the different classes of elements of where the function values are to be understood as formal terms:
• We set .
• For and we set .
• For and P = PC, v∈PC we set .
• For and PN = N|V ∈ PN we set l((PN, N, ·)) = N|V.
Now, the (reflexive) labeled partial order defines the semantic view on denoted by .
For every non-empty family D of controllables and a non-controllable N, we denote for a component PN ∈ PN of N by (PN, N, D) the component PN from N in the context of D and define the set ↓D(PN) of its lower D-basis projectors and the set ↑D(PN) of its upper D-basis projectors. A component PN ∈ PN of N is said to be predictable from D iff either [PN] = ∑B∈↓D(PN)[B𝔅] or [PN] = ∏B∈↑D(PN)[B𝔅] holds. If there is a subset of ↓D(PN), respectively ↑D(PN), containing at least one element from every origin C∈D such that the family of basis projectors is linearly independent and [PN] is the sum, respectively product, over this family, then PN is said to be minimally predictable from D.
In the case where [PN] is the sum of basis projectors, this corresponds to a logical OR. If the sum contains at least one basis projector for every controllable C ∈ D and none of them is spurious, i.e., they are linearly independent, then there can be no smaller family D′ ⊂ D of controllables from which PN is predictable, justifying the term “minimally”. However, there can be families in which at least one member is not contained in D with the same number of members or even fewer. In the case where [PN] is the product of basis projectors, this corresponds to a logical AND. Again, in the case of linear independence, there can be no proper subfamily of D from which PN is predictable, but there can be other families equal in size or with fewer members. In both cases, predictability is equivalent to the solvability of the eigenvalue equations of PN in the associated Hilbert space. Note that for a non-controllable N and a D-predictable component PN ∈ PN, it does not necessarily follow that any other component of N must be D-predictable.
Semantic structures like those in our example arise from a goal the agent tries to reach. A goal is given as a family of components of non-controllables. Then, families of controllables are searched from which the components are minimally predictable. A semantic structure for a goal then consists of all these families as a substructure of the semantic view. The meaningful objects are the components in their respective contexts, while their composition is given by the order relation and is, therefore, logically interpretable. A meaningful object is, therefore, a logical composition of controllable observation values in the context of a goal. Hence, the world's objects are highly contextual.
7. Discussion
In this study, we have outlined a framework of quantum-inspired cognitive agents for leveraging artificial intelligence and cognitive dynamical systems [7, 14]. Inspired by Shannon's groundbreaking mouse-maze system, “Theseus” [5, 78], we have suggested an embodied, enactive, complex multi-agent cognitive system [6, 10, 16], to which we refer as the DAGHT architecture [after five mythological characters: “Demiurge” (D), “Argus” (A), “Golem” (G), “Homunculus” (H), and “Theseus” (T)], whose agents are connected within a communication network, as depicted in Figure 1. This architecture relies upon an embedded perception-action cycle (PAC) (Figure 2), implemented by the Golem agent for the generation of actuator signals and the measurement of sensor signals within an interaction loop. Another agent, Homunculus, interprets sensory input during classification, articulates actuator instructions, and selects actions using representational symbolic data structures within a communication loop. In our approach, Shannon's original agent, Theseus, prescribes goals for its exploration behavior and infers semantic relationships (ontologies) among entities of the external world [7], while Argus determines what the agent can actually achieve and limits the possible interactions between Golem and the environment. Finally, the overarching Demiurge serves as the behavioral and cognitive control unit.
In a first step, we have modified the classical PAC toward a quantum-inspired PAC (QPAC) by assigning operators of a suitably chosen observable algebra to its ingredients: to each PAC actuator, an action operator (or briefly, actor) is attributed, while each PAC sensor is related to a Hermitian perception operator (in short, perceptor), such that actors correspond to the controllable observables, whereas perceptors find their counterpart in the non-controllable observables of a previous approach [82, 83]. Specifically, we have chosen the framework of algebraic quantum theory for our model [69, 71], since it allows one to clearly discriminate ontic and epistemic descriptions in terms of structural C*-algebras on the one hand and probabilistic W*-algebras on the other [72]. The ontology of the agent's environment [7, 12, 13, 29, 30] is then uniquely characterized by the fundamental commutation relations of the underlying C*-algebra.
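The operator assignment above can be made concrete in a minimal numerical sketch. This is an illustration, not the paper's implementation: we take a two-dimensional toy algebra in which the perceptor is the Pauli-Z matrix and the actor is a raising operator, and verify the Hermiticity requirement and a fundamental commutation relation of the kind that, per the text, characterizes the ontology.

```python
import numpy as np

# Perceptor: Hermitian operator, e.g. Pauli-Z measuring a binary feature.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Actor: a raising operator flipping the agent's state (controllable);
# action operators need not be Hermitian.
A = np.array([[0, 1], [0, 0]], dtype=complex)

assert np.allclose(Z, Z.conj().T)        # perceptor is Hermitian
assert not np.allclose(A, A.conj().T)    # actor is not

# A fundamental commutation relation of this toy algebra: [Z, A] = 2A,
# the su(2) relation underlying the universal envelope mentioned in the text.
comm = Z @ A - A @ Z
assert np.allclose(comm, 2 * A)
```

The nonvanishing commutator is the point: it encodes that acting and perceiving do not commute, which is what distinguishes the quantum-inspired algebra from a classical (abelian) one.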
An important concept of algebraic quantum theory is that of ontologically intrinsic properties, referring to a maximal abelian subalgebra of Hermitian observables, i.e., perceptors [72]. These operators provide a maximally Boolean description, reflected by the possibility of simultaneously solving the respective eigenvalue equations in the epistemic W*-algebra of bounded operators on a Hilbert space. In the second step of our study, we have used the observation space [28], spanned by the solutions of these eigenvalue equations, as the basis of a classification problem for the Homunculus agent. We have compared a standard machine learning technique, namely decision trees [7], which only lead to partitions of the observation space into overlapping hypercubes, with our newly developed classification algorithm, which exploits quantum probabilities instead [79, 80].
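The role of the abelian subalgebra can be sketched numerically. In this hypothetical example (the perceptors and the state are invented for illustration), two commuting Hermitian perceptors share an eigenbasis, and the squared amplitudes of an observation state in that basis yield the quantum probabilities a classifier of the kind described above could exploit.

```python
import numpy as np

# Two commuting Hermitian perceptors (diagonal in the same basis), e.g.
# hypothetical binary "odor" and "color" features over three classes.
P1 = np.diag([1.0, 1.0, -1.0])
P2 = np.diag([1.0, -1.0, -1.0])
assert np.allclose(P1 @ P2, P2 @ P1)  # abelian: simultaneously diagonalizable

# A normalized observation state expressed in the shared eigenbasis.
psi = np.array([0.8, 0.6, 0.0])
assert np.isclose(np.linalg.norm(psi), 1.0)

# Quantum probability of each joint eigenvalue pattern (one per class).
probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)
assert probs[0] > probs[1] > probs[2]  # graded, not all-or-nothing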
The third step is devoted to the autoepistemic ontology inference of the Theseus agent [50]. Based on the fundamental spectral theorem for Hermitian observables [70], we have developed an algorithm for the induction of semantic lattices in terms of partial sup lattices [85], which are generalizations of the projector lattices originally introduced in quantum logics [28, 58]. Although related to our previous study on ontology inference through Fock space modeling [82, 83], our algebraic approach is much more parsimonious and computationally efficient. While Fock space approaches suffer from the combinatorial explosion of tensor product powers, our approach straightforwardly distinguishes between controllable action operators and non-controllable perception operators, of which only the former are taken into account for generating a low-dimensional tensor product “world space”. Hermitian perceptors, by contrast, are described by their spectral decompositions and the resulting partial sup lattice structures [85]. In an example, we showed how the agent's world model is continually restructured during exploration as new observations are incorporated. We also showed how this model can be used to make predictions about the not yet explored parts of the agent's world.
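The spectral decomposition underlying this lattice construction can be illustrated in a few lines. This is a generic sketch of the spectral theorem and the projector order, not the paper's inference algorithm: a Hermitian perceptor decomposes into orthogonal rank-1 projectors, and the inclusion order P ≤ Q iff PQ = P is the order from which the lattice structure is built.

```python
import numpy as np

H = np.array([[2.0, 1.0], [1.0, 2.0]])  # a toy Hermitian perceptor
eigvals, eigvecs = np.linalg.eigh(H)

# One rank-1 spectral projector per (orthonormal) eigenvector.
projectors = [np.outer(v, v) for v in eigvecs.T]

for P in projectors:
    assert np.allclose(P @ P, P)   # idempotent
    assert np.allclose(P, P.T)     # Hermitian

# Distinct spectral projectors are mutually orthogonal.
P0, P1 = projectors
assert np.allclose(P0 @ P1, np.zeros((2, 2)))

# Spectral theorem: H is the eigenvalue-weighted sum of its projectors.
assert np.allclose(H, sum(l * P for l, P in zip(eigvals, projectors)))

# Lattice order P <= Q iff PQ = P; every projector lies below the identity.
I = np.eye(2)
for P in projectors:
    assert np.allclose(P @ I, P)
```

The partial sup lattices of the paper generalize exactly this projector order to the partially defined suprema that arise during incremental exploration.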
Our study is consistent with related approaches to quantum cognition and quantum-inspired computation [48–50] and can be generalized toward other fields of active research, such as conceptual spaces, latent semantic analysis (LSA), geometric information retrieval, or vector symbolic architectures [8, 52–56].
Future research on quantum-inspired cognitive agents may address a number of pertinent problems in artificial intelligence and cognitive systems science. First of all, the philosophical peculiarities induced by a rather “naive” interpretation of “representation” [6, 9–11] could be resolved in a generalized framework of dynamic semantics [63–65] by literally understanding mental representations as operators acting upon the (geometric) belief space of the cognitive agent [56, 62, 66–68]. Regarding those belief spaces as Hilbert spaces equipped with a scalar product allows the assessment of similarity for classification and concept formation, thus solving the symbol grounding problem in conceptual space [8]. Through the definition of tensor product states in representation space, concepts could be entangled, as suggested for non-compositional semantics [87]. Moreover, superposition states cannot in general be simultaneous eigenstates of complementary observables. Since this is reflected by the non-distributivity of the corresponding quantum logics, the frame problem [20] could be reformulated as the problem of finding the most relevant Boolean sublattice in an orthomodular lattice structure, as resulting from our ontology inference algorithm.
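The entangled-concepts suggestion can be made tangible with a standard check from quantum information theory; the two 2-dimensional "concept spaces" here are purely hypothetical. A state in the tensor product of two concept spaces is non-compositional (entangled) exactly when its coefficient matrix has Schmidt rank greater than one.

```python
import numpy as np

# Bell-like superposition |00> + |11> (normalized) over two concepts:
# neither factor has a definite value on its own.
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

# Reshape the 4-vector into a 2x2 coefficient matrix; rank 1 <=> product
# state, rank > 1 <=> entangled (Schmidt rank criterion).
C = psi.reshape(2, 2)
assert np.linalg.matrix_rank(C) == 2   # entangled: not a product of concepts

# A product state, for contrast, has rank 1.
phi = np.kron([0.6, 0.8], [1, 0])
assert np.linalg.matrix_rank(phi.reshape(2, 2)) == 1
```

Such rank-deficiency tests give a simple operational criterion for when a composite concept state is genuinely non-compositional in the sense of [87].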
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
Author contributions
RR contributed the DAGHT architecture. PbG provided the quantum-inspired perception-action cycle. IS and GW contributed the quantum-logic-inspired classification. MH-L provided the quantum-inspired ontology inference. All authors contributed to manuscript revision, read, and approved the submitted version.
Funding
This study was partly funded by the Federal Ministry of Education and Research in Germany under Grant Number 03IHS022A.
Acknowledgments
We thank Reinhard Blutner and two referees for their valuable comments to improve the manuscript.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1. ^For Shannon's instructive video demonstration, look at https://www.youtube.com/watch?v=vPKkXibQXGA.
2. ^Based on the terms used in control theory, we represent the cyclical flow of information in a circular form and denote the cyclical flow as a loop.
3. ^In [33] some of the authors studied image schemas which can be seen as results of “perceptual analysis” [34] of the mentioned “action schemas”.
4. ^In mathematics one often assumes the stronger property of self-adjointness [57, 69, 71].
5. ^Note that an algebra does not necessarily contain an identity, i.e., a neutral element with respect to algebraic multiplication. If this is the case, the algebra is called unital.
6. ^Lie algebras provide an example for non-unital algebras. However, any Lie algebra can be embedded into a larger unital algebra, called its universal envelope, which is U(𝔰𝔲(2)) in our case.
7. ^A representation π is a (C*-algebra) homomorphism, i.e., a structure-preserving linear mapping, from a C*-algebra 𝔄 into the special C*-algebra of bounded operators acting on a Hilbert space, denoted as the representation space of π. Hence, the “meaning” of an abstract element B∈𝔄 is its impact on Hilbert space, π(B)|a〉 = |c〉, in analogy to the ABC scheme of dynamic semantics [63–65].
8. ^Unless specified otherwise, we neglect the reference state ρ of the respective GNS constructions subsequently.
9. ^For the sake of simplicity, we identify observables from a C*-algebra 𝔄 and its W* representation in the sequel.
10. ^For atomic projectors |k〉〈k|, |i〉〈i| (i ≠ k), orthogonality is simply proven as (|k〉〈k|)(|i〉〈i|) = 〈k|i〉 |k〉〈i| = 0, since 〈k|i〉 = 0 for distinct orthonormal basis states.
11. ^Also called range projector in [76].
12. ^When the operator norm is defined as the supremum of the operator norms over all elements of an equivalence class.
13. ^For the sake of simplicity and smaller graphics, we limit ourselves to cheese and water and omit the bacon.
14. ^For the rest of this section, we pretend that the tensor product is commutative since the resulting spaces are equal up to isomorphism.
15. ^The last footnote holds analogously for the Cartesian product, too.
16. ^These lines only visualize that the partial order of the projector lattice has to be refined to be useful.
References
1. Pias C, editor. Cybernetics: The Macy Conferences 1946-1953. The Complete Transactions. Chicago, IL: Chicago University Press (2016).
2. Wiener N. Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press (1948).
4. Shannon CE, Weaver W. The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press (1949).
5. Shannon CE. Computers and automata. Proc Instit Radio Eng. (1953) 41:1234–41. doi: 10.1109/JRPROC.1953.274273
6. Varela FJ, Thompson E, Rosch E. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press (2017). doi: 10.7551/mitpress/9780262529365.001.0001
7. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ: Pearson (2010).
9. Dreyfus HL. Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Philos Rev. (2007) 20:247–68. doi: 10.1080/09515080701239510
10. von Glasersfeld E. An introduction to radical constructivism. In: Watzlawick P, editor. The Invented Reality: How Do We Know What We Believe We Know? Contributions to Constructivism. New York, NY: Norton (1984). p. 17–40.
11. Locatelli R, Wilson KA. Introduction: perception without representation. Topoi. (2017) 36:197–212. doi: 10.1007/s11245-017-9460-1
12. Quine WV. Ontological Relativity and Other Essays. New York, NY; London: Columbia University Press (1969). doi: 10.7312/quin92204
13. Osherson DN. Three conditions on conceptual naturalness. Cognition. (1978) 6:263–89. doi: 10.1016/0010-0277(78)90001-X
14. Haykin S. Cognitive Dynamic Systems. Cambridge: Cambridge University Press (2012). doi: 10.1017/CBO9780511818363
15. Eliasmith C. How to Build a Brain: A Neural Architecture for Biological Cognition. New York, NY: Oxford University Press (2013).
16. Hajduk M, Sukop M, Haun M. Multi-agent Systems-Terminology and Definitions. In: Cognitive Multi-agent Systems. Studies in Systems, Decision and Control. Cham: Springer International Publishing (2019), 138. p. 1–9.
17. Alonso E, Karcanias N, Hessami AG. Multi-agent systems: a new paradigm for systems of systems. In: ICONS 2013: The Eighth International Conference on Systems. Seville: IRIA XPS Press (2013).
18. Steels L, Belpaeme T. Coordinating perceptually grounded categories through language: a case study for colour. Behav Brain Sci. (2005) 28:469–89. doi: 10.1017/S0140525X05000087
19. Harnad S. The symbol grounding problem. Phys D. (1990) 42:335–46. doi: 10.1016/0167-2789(90)90087-6
20. Shanahan M. The frame problem. In: Zalta EN, editor. The Stanford Encyclopedia of Philosophy. Stanford, CA: Metaphysics Research Lab, Stanford University (2016). Available online at: https://plato.stanford.edu/archives/spr2016/cite.html
21. Fodor J, Pylyshyn ZW. Connectionism and cognitive architecture: a critical analysis. Cognition. (1988) 28:3–71. doi: 10.1016/0010-0277(88)90031-5
22. Pinker S, Jackendoff R. The faculty of language: what's special about it? Cognition. (2005) 95:201–36. doi: 10.1016/j.cognition.2004.08.004
23. von Uexküll J. The theory of meaning. Semiotica. (1982) 42:25–79. doi: 10.1515/semi.1982.42.1.25
24. Fuster JM. Upper processing stages of the perception-action cycle. Trends Cogn Sci. (2004) 8:143–5. doi: 10.1016/j.tics.2004.02.004
25. Baranyi P, Csapo A, Sallai G. Cognitive Infocommunications (CogInfoCom). Cham: Springer (2015). doi: 10.1007/978-3-319-19608-4
26. Römer R, beim Graben P, Huber-Liebl M, Wolff M. Unifying physical interaction, linguistic communication, and language acquisition of cognitive agents by minimalist grammars. Front Comput Sci. (2022) 4:733596. doi: 10.3389/fcomp.2022.733596
27. Römer R, beim Graben P, Huber M, Wolff M, Wirsching G, Schmitt I. Behavioral control of cognitive agents using database semantics and minimalist grammars. In: Proceedings of the 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). Naples (2019). p. 73–8. doi: 10.1109/CogInfoCom47531.2019.9089947
28. Birkhoff G, von Neumann J. The logic of quantum mechanics. Ann Math. (1936) 37:823–43. doi: 10.2307/1968621
29. Allemang D, Hendler JA. Semantic Web for the Working Ontologist: Modeling in RDF, RDFS and OWL. Amsterdam; Boston, MA: Morgan Kaufmann Publishers/Elsevier (2008).
30. Cohen PR. Growing ontologies. In: Bagnara S, editor. Proceedings of the 3rd European Conference on Cognitive Science. Sienna: Rijksuniversiteit Groningen, Instituto di Psicologica (1999).
31. Gruber TR. A Translation approach to portable ontology specifications. Knowledge Acquisit. (1993) 5:199–220. doi: 10.1006/knac.1993.1008
32. Genesereth MR, Nilsson NJ. Logical Foundations of Artificial Intelligence. Los Altos, CA: Morgan Kaufmann Publishers/Elsevier (1987).
33. Huber M, Wolff M, Meyer W, Jokisch O, Nowack K. Some design aspects of a cognitive user interface. Online J Appl Knowledge Manage. (2018) 6:15–29. doi: 10.36965/OJAKM.2018.6(1)15-29
34. Mandler JM. How to build a baby: on the development of an accessible representational system. Cogn Dev. (1988) 3:113–36. doi: 10.1016/0885-2014(88)90015-9
35. Patel-Schneider P, Parsia B, Motik B. OWL 2 Web Ontology Language Structural Specification Functional-Style Syntax. W3C (2009). Available online at: https://www.w3.org/TR/2009/REC-owl2-syntax-20091027/
36. Motik B, Grau BC, Patel-Schneider P. OWL 2 Web Ontology Language Direct Semantics. 2nd ed. W3C (2012). Available online at: https://www.w3.org/TR/2012/REC-owl2-direct-semantics-20121211/
37. Musen MA, Team TP. Protégé ontology editor. In: Werner D, Olaf W, Kwang-Hyun C, Hiroki Y, editors. Encyclopedia of Systems Biology. New York, NY: Springer (2013). p. 1763–5. doi: 10.1007/978-1-4419-9863-7_1104
38. Tudorache T, Nyulas C, Noy NF, Musen MA. WebProtégé: a collaborative ontology editor and knowledge acquisition tool for the web. Semant Web. (2013) 4:89–99. doi: 10.3233/SW-2012-0057
39. Wong W, Liu W, Bennamoun M. Ontology learning from text. ACM Comput Surveys. (2012) 44:1–36. doi: 10.1145/2333112.2333115
40. Lenat DB, Guha RV, Pittman K, Pratt D, Shepherd M. CYC: toward programs with common sense. Commun ACM. (1990) 33:30–49. doi: 10.1145/79173.79176
42. Neches R, Fikes RE, Finin T, Gruber T, Patil R, Senator T, et al. Enabling technology for knowledge sharing. AI Mag. (1991) 12:36.
43. Piaget J. The Origins of Intelligence in Children. New York, NY: International Universities Press, Inc. (1952).
45. Hoffmann R, Wolff M. Towards hierarchical cognitive systems for intelligent signal processing. In: Markovski S, Gusev M, editors. ICT Innovations 2012, Secure and Intelligent Systems, Ohrid, Macedonia, Sep. 2012, Proceedings, Advances in Intelligent Systems and Computing. Berlin; Heidelberg: Springer (2012). p. 613–8.
46. Römer R. Investigations on probabilistic analysis synthesis systems using bidirectional HMMs. In: Markovski S, Gusev M, editors. ICT Innovations 2012, Secure and Intelligent Systems, Ohrid, Macedonia, Sep. 2012, Proceedings, Advances in Intelligent Systems and Computing. Berlin; Heidelberg: Springer (2012). p. 642–7.
47. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. (1979) 47:263–91. doi: 10.2307/1914185
48. Busemeyer JR, Bruza PD, editors. Quantum Models of Cognition and Decision. Cambridge: Cambridge University Press (2012). doi: 10.1017/CBO9780511997716
49. Pothos EM, Busemeyer JR. Can quantum probability provide a new direction for cognitive modeling? Behav Brain Sci. (2013) 36:255–74. doi: 10.1017/S0140525X12001525
50. Blutner R, beim Graben P. Quantum cognition and bounded rationality. Synthese. (2016) 193:3239–91. doi: 10.1007/s11229-015-0928-5
51. Gigerenzer G, Goldstein DG. Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev. (1996) 103:650–69. doi: 10.1037/0033-295X.103.4.650
53. van Rijsbergen CJ. The Geometry of Information Retrieval. Cambridge: Cambridge University Press (2004). doi: 10.1017/CBO9780511543333
54. Gayler RW. Vector symbolic architectures are a viable alternative for Jackendoff's challenges. Behav Brain Sci. (2006) 29:78–9. doi: 10.1017/S0140525X06309028
55. Kanerva P. Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cogn Comput. (2009) 1:139–59. doi: 10.1007/s12559-009-9009-8
56. beim Graben P, Huber M, Meyer W, Römer R, Wolff M. Vector symbolic architectures for context-free grammars. Cogn Comput. (2021) 14:733–48. doi: 10.1007/s12559-021-09974-y
57. Primas H. Chemistry, Quantum Mechanics and Reductionism. Berlin: Springer (1981). doi: 10.1007/978-3-662-11314-1
58. Svozil K. Quantum logic. A brief outline. arXiv preprint arXiv:quant-ph/9902042. (2005). doi: 10.48550/arXiv.quant-ph/9902042
59. Foulis DJ, Randall CH. Operational statistics. I. Basic concepts. J Math Phys. (1972) 13:1667–75. doi: 10.1063/1.1665890
60. Randall CH, Foulis DJ. Operational statistics. II. Manuals of operations and their logics. J Math Phys. (1973) 14:1472–80. doi: 10.1063/1.1666208
61. Foulis DJ. A half-century of quantum logic. What have we learned? In: Aerts D, editor. Quantum Structures and the Nature of Reality. vol. 7 of Einstein Meets Magritte: An Interdisciplinary Reflection on Science, Nature, Art, Human Action and Society. Dordrecht: Kluwer (1999). p. 1–36. doi: 10.1007/978-94-017-2834-8_1
62. beim Graben P, Potthast R. Inverse problems in dynamic cognitive modeling. Chaos. (2009) 19:015103. doi: 10.1063/1.3097067
63. Skinner BF. Verbal Behavior. New York, NY: Appleton-Century-Crofts (1957). doi: 10.1037/11256-000
64. Gärdenfors P. Knowledge in Flux. Modeling the Dynamics of Epistemic States. Cambridge, MA: MIT Press (1988).
65. beim Graben P. Order effects in dynamic semantics. Top Cogn Sci. (2014) 6:67–73. doi: 10.1111/tops.12063
66. beim Graben P. Quantum representation theory for nonlinear dynamical automata. In: Wang R, Gu F, Shen E, editors. Advances in Cognitive Neurodynamics. Proceedings of the International Conference on Cognitive Neurodynamics, ICCN 2007. Berlin: Springer (2008). p. 469–73. doi: 10.1007/978-1-4020-8387-7_81
67. Carmantini GS, beim Graben P, Desroches M, Rodrigues S. A modular architecture for transparent computation in recurrent neural networks. Neural Netw. (2017) 85:85–105. doi: 10.1016/j.neunet.2016.09.001
68. beim Graben P, Gerth S. Geometric representations for minimalist grammars. J Logic Lang Inform. (2012) 21:393–432. doi: 10.1007/s10849-012-9164-2
69. Haag R. Local Quantum Physics: Fields, Particles, Algebras. Berlin: Springer (1992). doi: 10.1007/978-3-642-97306-2
72. Atmanspacher H, Primas H. Epistemic and ontic quantum realities. In: Castell L, Ischebeck O, editors. Time, Quantum, and Information. Berlin: Springer (2003). p. 301–21. doi: 10.1007/978-3-662-10557-3_20
73. Dirac PAM. A new notation for quantum mechanics. Math Proc Cambridge Philos Soc. (1939) 35:416–8. doi: 10.1017/S0305004100021162
74. Folland GB. Harmonic Analysis in Phase Space. Princeton: Princeton University Press (1989). doi: 10.1515/9781400882427
75. Edmonds AR. Angular Momentum in Quantum Mechanics. Princeton: Princeton University Press (1957). doi: 10.1515/9781400884186
76. Chu CH, Dang T, Russo B, Ventura B. Surjective isometries of real C*-algebras. J London Math Soc. (1993) 47:97–118. doi: 10.1112/jlms/s2-47.1.97
77. Mittelstaedt P. Quantum Logic. Dordrecht: D. Reidel Publishing Company (1978). doi: 10.1007/978-94-009-9871-1
78. Sutherland IE. A method for solving arbitrary-wall mazes by computer. IEEE Trans Comput. (1969) 18:1092–7. doi: 10.1109/T-C.1969.222592
79. Schmitt I, Wirsching G, Wolff M. Quantum-based modelling of database states. In: Diederik A, Andrei K, Massimo M, Bourama T, editors. Quantum-Like Models for Information Retrieval and Decision-Making. Cham: Springer (2019). p. 115–27. doi: 10.1007/978-3-030-25913-6_6
80. Wirsching G, Schmitt I, Wolff M. Quantenlogik: Eine Einführung für Ingenieure und Informatiker. Springer-Lehrbuch.
81. Schmitt I. Incorporating Weights into a quantum-logic-based query language. In: Quantum-Like Models for Information Retrieval and Decision-Making. Springer (2019). p. 129–43. doi: 10.1007/978-3-030-25913-6_7
82. Wolff M, Huber M, Wirsching G, Römer R, beim Graben P, Schmitt I. Towards a quantum mechanical model of the inner stage of cognitive agents. 2018 9th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). Budapest (2018).
83. Huber M, Römer R, Meyer W, beim Graben P, Wolff M. Struktur und Bedeutung-Theseus Reloaded. Cottbus: FG Kommunikationstechnik, BTU Cottbus-Senftenberg (2019).
86. Raney GN. A subdirect-union representation for completely distributive complete lattices. Proc Am Math Soc. (1953) 4:518–22. doi: 10.1090/S0002-9939-1953-0058568-4
Keywords: cognitive agents, cognitive dynamical systems, artificial intelligence, semantic representations, quantum logic, quantum cognition, classifiers, ontology
Citation: Huber-Liebl M, Römer R, Wirsching G, Schmitt I, beim Graben P and Wolff M (2022) Quantum-inspired cognitive agents. Front. Appl. Math. Stat. 8:909873. doi: 10.3389/fams.2022.909873
Received: 31 March 2022; Accepted: 03 August 2022;
Published: 06 September 2022.
Edited by:
Norbert Marwan, Potsdam Institute for Climate Impact Research (PIK), Germany
Reviewed by:
Peter Bruza, Queensland University of Technology, Australia
Francesco Bianchini, University of Bologna, Italy
Copyright © 2022 Huber-Liebl, Römer, Wirsching, Schmitt, beim Graben and Wolff. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Markus Huber-Liebl, bWFya3VzLmh1YmVyJiN4MDAwNDA7Yi10dS5kZQ==