Human-centric AI aims at designing and developing systems that operate alongside humans in a cognitively compatible and synergistic way. Such systems are required to exhibit human-like cognitive abilities and intelligence, either at the general level of the human population or at a specialized level of expertise in a specific field. In either case, human-centric AI systems act as (expert) companions or peers that work alongside their human users to support and enhance their capabilities.
To achieve this aim, human-centric AI systems would need to be built based on a computational understanding of the human mind and its various faculties. Most importantly, and as a first step, one would need to understand how human cognition operates at its many different levels and how this can guide the development of AI systems at several places in their design, such as:
• Comprehending the data or information received from their environment.
• Deciding on appropriate actions to take and explaining these decisions to users.
• Learning new knowledge pertaining to their problem domain.
• Interacting in a natural way with users, explaining and debating positions.
• Ensuring that their operation and general behavior are ethical by human standards.
These multi-faceted requirements of human-centric AI systems would best be addressed through an interdisciplinary approach, one that bridges AI with and integrates into it elements from several other disciplines, such as Cognitive Science, Linguistics, and Philosophy.
Setting the computational understanding and automation of human cognition at the center of human-centric AI systems would require a foundational shift in the logical framework underlying the computation of intelligent solutions to problems or tasks: a shift that would naturally align with the characteristics of human cognition and reasoning. It would move away from the strict, absolute guarantees of classical formal inference underlying conventional computer systems, toward a more flexible logical framework that tolerates uncertainty and incompatible alternatives in the available information, and that adapts as new information resolves ambiguities in what is already known. In effect, such a foundational shift would move the focus of human-centric AI systems away from the absolute correctness and optimality of solutions, toward satisficing solutions that strike an acceptable balance among a variety of criteria and are useful enough for users to take a final decision on the matter at hand.
This Research Topic explores the possibility that this foundational logical shift consists of replacing the formal framework of Classical Logic, which has so far served as the “Calculus of Computer Science”, with that of Argumentation, serving as the “Calculus for Human-centric AI”. The aim is to expose and explore the possible central and foundational role of argumentation for human-centric AI at all levels of such systems, whether at the level of learning knowledge or reasoning with knowledge, or of interaction and cooperation between such systems and their users to solve problems.
This position on the foundational role of argumentation in human-centric AI rests on the central premise that argumentation is closely linked with human cognition and reasoning, as advocated by many studies in Philosophy and Cognitive Psychology. Argumentation has been studied for many years across a wide spectrum of disciplines, from Philosophy & Rhetoric to Language & Cognition, and recently in AI for Explainable Decision Making and Machine Learning; this breadth reveals its universality and its suitability for an interdisciplinary approach to human-centric AI. Furthermore, recent work has shown that reasoning via (dialectic) argumentation includes classical strict reasoning as a special boundary case; hence, argumentation can provide a smooth paradigm shift from a logic suited to the closed nature of problems in conventional computing to one suited to problems in the open realm of AI.
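The boundary-case relationship between argumentation and classical reasoning can be illustrated with a minimal sketch of Dung-style abstract argumentation (the function name, argument labels, and examples below are illustrative, not taken from the call): when the attack relation is empty, every argument is accepted, mirroring strict classical inference; once attacks are present, acceptance becomes a dialectic matter of defense.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: a set of argument labels
    attacks:   a set of (attacker, attacked) pairs
    Iterates the characteristic function: an argument is accepted once every
    one of its attackers is counter-attacked by an already-accepted argument.
    """
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted:
            attackers = {x for (x, y) in attacks if y == a}
            # 'a' is defended if each attacker is attacked by some accepted argument
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Boundary case: no attacks, so all arguments are accepted (strict classical reasoning)
print(sorted(grounded_extension({"p", "q"}, set())))  # ['p', 'q']

# Dialectic case: b attacks a, c attacks b; c defends a, so the extension is {a, c}
print(sorted(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")})))  # ['a', 'c']
```

The design choice here is deliberate: removing all attacks collapses the framework to "accept everything derivable", which is the sense in which classical strict reasoning sits inside argumentation as a special case.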
This Research Topic aims to attract contributions that investigate the possible foundational role of argumentation in modeling human cognition and that explore how this can provide a cognitive bridge between machine and human intelligence for the design and implementation of human-centric AI systems. The main areas of investigation are the following:
• Logical frameworks of argumentation
• Argumentation and computational models of human reasoning and learning
• Argumentation languages for representing and learning cognitive knowledge
• Argumentation and human-like interaction of systems
• Argumentation and human reasoning in Natural Language
• Argumentation mining and debate
• Cognitive experiments for the role of argumentation in human intelligence
• Cognitively- and socially-adequate explanations via argumentation
• Explainable Decision Making and Argumentation
• Argumentation, induction and Explainable Machine Learning
• Argumentation-based justification and persuasion
• Argumentation and “ethicacy” (degree of adherence to ethical values) of AI systems.
• Identifying and addressing Human and Machine biases via argumentation
• Argumentation for Robust, Trustworthy, and Reliable AI
• Argumentation and neural-symbolic cognitive integration
• Understanding and enhancing Deep Learning via argumentation
• Argumentation and embodiment of systems in their environment
• Argumentation and the integration of perception and cognition
• Cognitive Architectures and Argumentation
• Theory and Practice of argumentation for AI systems
• Methodologies for building argumentation-based practical AI systems
• Argumentation for perspicuous computing
• Evaluation of argumentation in AI applications
• Open argumentation infrastructure/platforms on the Web
Particular attention will be given to contributions based on interdisciplinary study, where the ideas are presented and validated through a synthesis of different perspectives and approaches.