This Research Topic explores cutting-edge advances in artificial intelligence, focusing on the explainability of high-dimensional patterns in domains ranging from brain-computer interfacing and neuroimaging to automation systems, IoT sensors, and computer vision, where systems operate by learning intricate patterns from diverse datasets. These patterns are essential for informed decision-making. To harness them effectively, however, algorithms must decipher complex spatial and temporal relationships within the data. This challenge is magnified by noise, non-stationarity, non-linearity, and autocorrelation, as well as by sparse and autoregressive relationships. Traditional machine learning (ML) methods, which often relied on costly feature engineering, struggled to cope with the curse of dimensionality. This themed article collection will explore recent developments, methodologies, and applications in these areas, shedding light on emerging trends and innovative solutions.
In recent years, deep learning models such as transformers and autoencoders have emerged as promising tools for unravelling intricate relationships within dynamic systems. However, these models carry significant computational requirements and offer little auditability. The computational burden leads researchers to explore compressed encoding schemes and methods to regulate the growth of loss functions. One innovative approach employs characteristic temporal and spatial attention features to filter relevant information, which both reduces computational and memory needs and enhances learning in subsequent layers of deep networks. These improvements are pivotal for online learning networks and for real-time intelligent systems capable of handling vast amounts of data. The lack of auditability, in turn, drives efforts to enhance explainability through saliency maps, gradient maps, and layer-wise relevance propagation. These techniques make rational connections between feature components and classifier performance, supporting intelligent systems in tasks where responsibility for decisions matters.
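To make this concrete, the sketch below illustrates one of the simplest of these explainability techniques, a gradient-based saliency map, in which the gradient of a predicted class score with respect to the input highlights which input locations most influence the decision. This is only a minimal illustration: the model, input dimensions, and class selection are hypothetical placeholders, not anything prescribed by this Research Topic.

```python
# Minimal sketch of a gradient-based saliency map for a trained classifier.
# The model below is a hypothetical stand-in for any spatio-temporal classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 4),
)
model.eval()

# Example input with gradients enabled so relevance can flow back to it.
x = torch.randn(1, 1, 64, 64, requires_grad=True)
logits = model(x)
target_class = logits.argmax(dim=1)

# Back-propagate the target logit; the absolute input gradient serves as
# a saliency (relevance) map over the input locations.
logits[0, target_class].backward()
saliency = x.grad.abs().squeeze()  # shape (64, 64)
print(saliency.shape)
```

Gradient maps and layer-wise relevance propagation follow the same spirit but distribute relevance differently through the network.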
The primary aim of this collection is to showcase a diverse yet complementary array of contributions highlighting new developments and applications in deep learning and hybrid machine learning, tailored to the challenges posed by spatio-temporal datasets in the applications mentioned above. We invite authors to submit original, unpublished work that leverages spatial and temporal features from data with contextual temporal information across diverse domains. The focus is on regularizing deep learning mechanisms by exploiting these features to improve training efficiency and accuracy.
We anticipate that this collection will not only contribute significantly to the field of cognitive systems but also foster innovation by inspiring new methodologies and approaches. By elucidating the complexities of spatio-temporal datasets, these contributions will pave the way for more effective and transparent intelligent systems, advancing the field of artificial intelligence.
Topics include, but are not limited to, the following methods, with explainability in focus and applications supporting spatiotemporal and spectral feature extraction:
● Recurrent learning networks
● Temporal association networks
● Temporal causal convolutional networks
● Sequence prediction
● Sequence characterization
● Attention mechanisms
● Spatial localization
● Saliency and relevance maps
● Encoding of dynamic systems
● Sparse encoding
● Online learning
● Real-time learning
● Autoencoders
● Transformers
● Intelligent system applications of deep learning
● Spiking neural networks
● Multimodal classifiers
● Logic-based classifiers
● Hybrid approaches enhancing explainability
Keywords:
Explainability in artificial intelligence, High-dimensional patterns, Brain-computer interfacing, Neuroimaging applications, Automation systems, IoT sensors, Computer vision-based systems, Deep learning models, Spatial and temporal relationships
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.