Over the last decade, deep learning has made significant strides in most AI tasks, including generating accurate text-to-image models. However, the ability of large deep learning models to address neuroscience problems remains a subject of debate. The small scale of neuroscience databases and the limitations of single-modal data in reflecting real-world conditions call for more research on multimodal perception in AI for science. This research should encompass visual signals, images, EEG, ECG, EOG, and EMG, and tackle issues related to affective computing, rehabilitation analysis, mental disorder evaluation, emotion recognition, and cardiovascular disease diagnosis.
This research topic collects recent original research that advances the fundamental theory and technologies of affective computing, biomedical signal processing, multimodal fusion algorithms, and biomedical signal-based clinical applications. Topics of interest include:
- Techniques for processing biomedical data.
- Machine learning/deep learning algorithms that combine multiple types of data, such as images and physiological signals.
- Algorithms that can learn from both labeled and unlabeled data in multimodal fusion applications.
- Practical applications of biomedical signal processing, such as diagnosis and treatment of medical conditions.
- Methods for evaluating the performance of multimodal fusion algorithms.
- Combining multiple types of signals to create interfaces between humans and machines.
- Using multimodal fusion algorithms in biomedical applications, such as monitoring emotions and mental health using visual and physiological signals.
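To make the fusion theme above concrete, the sketch below shows one common baseline, feature-level (late) fusion: per-modality embeddings are computed independently and concatenated before a shared decision layer. This is a minimal illustrative example only; the random-projection "feature extractors", dimensions, and linear head are hypothetical stand-ins for whatever models a real study would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, proj):
    """Stand-in feature extractor: project raw inputs into an embedding space."""
    return np.tanh(x @ proj)

n = 8                             # number of samples (toy value)
img = rng.normal(size=(n, 64))    # e.g. flattened image features
eeg = rng.normal(size=(n, 32))    # e.g. EEG channel features

W_img = rng.normal(size=(64, 16))  # hypothetical per-modality projections
W_eeg = rng.normal(size=(32, 16))

# Feature-level fusion: concatenate the per-modality embeddings.
fused = np.concatenate(
    [extract_features(img, W_img), extract_features(eeg, W_eeg)], axis=1
)

# A linear head on the fused representation yields a binary decision.
w = rng.normal(size=fused.shape[1])
preds = (fused @ w > 0).astype(int)
```

Alternatives such as early fusion (concatenating raw signals) or attention-based fusion trade off flexibility against the amount of labeled data required, which is one reason evaluation methodology for fusion algorithms appears as its own topic above.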