As machine learning (ML) and deep learning (DL) progress from purely theoretical models to real-world implementations, the issue of ML safety and fairness becomes increasingly important. In neuroscience, machine learning has demonstrated growing proficiency, even outperforming humans in some tasks. With the abundance of neuroscience data becoming available in the big data age, ML stands to achieve even greater efficacy across a wide range of tasks.
However, alongside these capabilities, the safety and fairness challenges inherent to ML have far-reaching consequences. Because ML models are algorithms trained on existing data, they frequently inherit the biases of past experience. In the absence of monitoring and oversight tools, even the most well-intentioned ML model can propagate the biases present in its training data. Furthermore, weaknesses in ML models may be exploited, intentionally or unintentionally, to produce undesirable outcomes or worsen power imbalances.
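As one hedged illustration of what such monitoring can look like in practice, the minimal sketch below checks whether a binary classifier's positive-prediction rate differs across demographic groups (a demographic parity difference). The variable names (`y_pred`, `group`) and the synthetic data are assumptions made for illustration only, not a prescribed method or a specific neuroscience pipeline.

```python
# Minimal, hypothetical sketch of a fairness monitoring check:
# compare positive-prediction rates across groups (demographic parity).
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)

# Synthetic example with two groups; in practice y_pred would come from a
# trained model and group from a recorded demographic attribute.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)        # binary model outputs
group = rng.choice(["A", "B"], size=1000)     # group membership labels
print(demographic_parity_difference(y_pred, group))
```

A value near zero suggests similar prediction rates across groups; larger values flag a disparity worth investigating before deployment.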
Neuroscience is the study of how the brain performs diverse perceptual, cognitive, and motor processes. ML-based artificial intelligence approaches now allow intelligent systems to process big data, opening new possibilities for neuroscience, such as understanding how millions of neurons and nodes cooperate to manage vast amounts of information and how the brain develops and governs actions. Given the safety and fairness issues with ML models, researchers frequently underestimate the risks of datafying neuroscience. This calls attention to a more significant and fundamental problem: safety and fairness issues in ML models exacerbate imbalances and risk, with particularly severe consequences for neuroscience.
This Research Topic welcomes Original Research and Review articles that discuss the fairness and safety implications of using ML in real-world neuroscience systems, propose methods to detect, prevent, and/or alleviate undesired fairness and safety issues that ML-based systems may exhibit, analyze the vulnerability of neuroscience ML systems to adversarial attacks and possible defense mechanisms, and, more broadly, any paper that stimulates progress on the topic.
Potential topics include but are not limited to the following:
-Application of ML/DL in neuroscience
-Measurement of safety in neuroscience
-Understanding disparities in predicted outcomes
-Construction of unbiased ML models for neuroscience
-Recourse and contestability of biased ML results
-Safe reinforcement learning in neuroscience
-Ethical and legal consequences of using ML in real-world systems