Deep neural networks (DNNs) have achieved great success in neuroscience, computer vision, natural language processing (NLP), and other fields where data may exhibit complex intrinsic patterns, yet there is growing concern about their black-box nature. Model outputs are produced from designed inputs with near-intuitive intelligence, so the underlying mechanisms of decision making remain poorly understood; the situation becomes even more complex when DNNs are deployed in real-world environments. In addition, the interpretability of AI is gradually improving alongside advances in neuroscience, and it directly affects people’s trust in AI. Moreover, interpretability is a desirable property for promoting integration between computational modeling of the brain and basic neuroscience.
The objective of this Special Section is to bring together state-of-the-art research contributions that address key aspects of the interpretability of DNN-based decision making, reflecting effective synergy and collaboration between AI and neuroscience. Relevant directions include advances in cognitive neuroscience and artificial intelligence, decision-theoretic approaches to neuroscience, cognitive robotics, AI systems inspired by neuroscience, and drug discovery and genomics.
Topics of interest for this special section include, but are not limited to:
- Advances in Cognitive Neuroscience & Artificial Intelligence
- Decision-theoretic approaches to neuroscience
- Theoretical research on decision-making artificial intelligence
- Computer vision inspired by neuroscience
- Passive and active interpretation of DNNs
- Causal modeling and causal reasoning
- Trustworthy AI
- Deep learning theory (supervised/unsupervised/reinforcement learning)
- Computer-aided diagnosis
- Cognitive Robotics
- Drug discovery and genomics
- DNN visualization and attention analysis
- AI systems inspired by neuroscience (surveillance and monitoring, analysis and processing of medical images, document understanding and creation, etc.)
- Human perception-based multimedia retrieval (healthcare, fintech, large video archives, etc.)
- Applications in autonomous vehicles
- Applications in mortgage qualification, credit, and insurance risk assessment
We also highly recommend submitting multimedia with each article, as it significantly increases the visibility, downloads, and citations of articles.