Background: The application of Artificial Intelligence (AI) has experienced an upsurge in almost all scientific disciplines. From physics and medicine to social sciences and biology, scientists are increasingly applying advanced machine learning and deep learning techniques to problems that long seemed unsolvable. As an example, the more than fifty-year-old challenge of predicting the 3-dimensional conformation of a protein from its primary amino acid sequence was recently advanced substantially by deep learning. The AlphaFold 2 team, which had already worked on this problem, greatly improved the performance of their predictive model by applying innovative AI techniques (Tunyasuvunakool et al., 2021) and shared their work openly. This example shows how AI is revolutionizing not only the way we solve scientific problems, but also the way we collaborate. More researchers than ever are using open-source software for computation and algorithm development, and this work is increasingly shared via publicly available repositories.
In this Research Topic we address the way forward to better predict the toxicity of compounds in the human risk context. The aim of the topic is to collect practical examples of AI applications that solve complex toxicological problems. Predicting risk related to compound exposure, in particular, is a field where scientists and regulators have classically relied heavily on animal experiments. Although progress has been made in replacing some of the mandatory animal tests with in vitro test batteries, no AI-driven approach has yet been accepted as a valid risk assessment approach in Europe or the US.
Goal: Although computational toxicology has been around for quite some time, efforts have focused on more classical prediction methods such as physiologically based pharmacokinetic (PBPK) modeling, quantitative structure-activity relationship (QSAR) modeling, and differential equation modeling. Only recently have machine learning and deep learning techniques found their way into predicting the hazard of chemicals for humans. There are many examples where techniques such as PBPK and QSAR can be enriched with machine learning, leading to better predictions and the possibility of integrating additional data sources or simply using more data. In terms of establishing an AI that can confidently predict hazard AND toxicological risk for humans, we are not there yet. Considering the massive progress in the field of AI over the past five years alone, we think the time is now ripe for establishing a true probabilistic risk assessment framework that incorporates everything AI has to offer. This will lead not only to a reduction in the number of animals used for risk assessment, but also to a better understanding of how compounds cause effects and to better predictions for human health in general.
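To give a flavor of what "enriching QSAR with machine learning" can mean in practice, the following is a minimal, purely illustrative sketch: it trains a machine learning classifier on precomputed molecular descriptors to predict a binary toxicity endpoint. All descriptor names, the endpoint, and the synthetic data are hypothetical placeholders, not data or methods prescribed by this Research Topic.

```python
# Minimal, hypothetical QSAR-style sketch: predict a binary toxicity endpoint
# from molecular descriptors with a machine learning model. Descriptor names,
# the endpoint, and the data are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in descriptor table; in practice these would come from a descriptor
# tool (e.g., RDKit) or an in-house cheminformatics pipeline.
n_compounds = 200
descriptors = pd.DataFrame({
    "mol_weight": rng.uniform(100, 600, n_compounds),
    "log_p": rng.normal(2.0, 1.5, n_compounds),
    "tpsa": rng.uniform(20, 140, n_compounds),
    "h_bond_donors": rng.integers(0, 6, n_compounds),
})

# Hypothetical binary endpoint (e.g., an in vitro assay call).
endpoint = (descriptors["log_p"] + rng.normal(0, 1, n_compounds) > 2.5).astype(int)

# A classical QSAR workflow enriched with a machine learning model and
# cross-validated performance estimation.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, descriptors, endpoint, cv=5, scoring="roc_auc")
print(f"Mean cross-validated ROC AUC: {scores.mean():.2f}")
```

In a real contribution, the choice of descriptors, model, validation scheme, and applicability domain would of course need to be documented and justified.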
Scope and Details for Authors: Traditional approaches such as animal tests and in vitro test batteries follow a fundamentally different way of evaluating risk than AI methods would. Emerging techniques in natural language processing, semantic interoperability, machine learning, and deep learning also offer a way to connect these approaches. The goal of this Research Topic is to showcase novel AI-driven methods and tools for integrating in vivo and in vitro methods with machine learning, or AI-only approaches, that can lead to the establishment of a probabilistic risk assessment framework.
Papers submitted to this Research Topic should demonstrate and/or highlight the potential of the approach to be adapted or integrated into risk assessment for compound safety in humans. We especially invite integrative contributions that address multiple aspects of risk assessment simultaneously. Research contributions showcasing AI approaches in which, for example, chemical information is integrated with historical animal data, in vitro data, and omics data in a read-across approach are highly valued.
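As a minimal, hypothetical sketch of one step in such an integration, the snippet below estimates an endpoint for a target compound from its nearest analogues in descriptor space, a simple similarity-based read-across. Compound names, descriptor values, and the endpoint are invented for illustration only.

```python
# Hypothetical read-across sketch: infer an endpoint for a target compound
# from its nearest structural analogues. All values and names are placeholders.
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# Source compounds with a known endpoint (e.g., a NOAEL-like quantity in
# mg/kg) and a few descriptor columns.
source = pd.DataFrame(
    {
        "mol_weight": [180.2, 250.3, 310.4, 150.1, 290.0],
        "log_p": [1.2, 2.8, 3.1, 0.5, 2.6],
        "endpoint_mg_per_kg": [50.0, 10.0, 5.0, 100.0, 8.0],
    },
    index=["cmpd_A", "cmpd_B", "cmpd_C", "cmpd_D", "cmpd_E"],
)

# Target compound with descriptors but no measured endpoint.
target = np.array([[300.0, 2.9]])

# Find the three most similar source compounds in descriptor space.
nn = NearestNeighbors(n_neighbors=3).fit(
    source[["mol_weight", "log_p"]].to_numpy()
)
_, idx = nn.kneighbors(target)
analogues = source.iloc[idx[0]]

# A simple read-across estimate: average the analogues' endpoint values.
print("Analogues:", analogues.index.tolist())
print("Read-across estimate (mg/kg):", analogues["endpoint_mg_per_kg"].mean())
```

In a real study the descriptors would be standardized, the similarity metric and analogue selection would need toxicological justification, and the associated uncertainty would have to be quantified; the sketch only illustrates the kind of integration step such contributions could document.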
This Research Topic will include both review papers and original research papers. Review papers should summarize the relevant existing literature and highlight overarching trends to date, as well as future directions. Original research papers should utilize and/or integrate in vivo, in vitro and machine learning approaches. Original papers that ONLY use either in vitro/in vivo data or machine learning approaches will not be considered.
We encourage original papers that present novel methods or computational tools, combine existing (i.e., published) models and/or approaches, or deploy existing models and/or approaches. Authors should address how the data were collected and how the models could be used in a probabilistic risk assessment context. We would also like to invite more technical contributions that aim to educate the audience on performing machine learning tasks. Contributions with an educational scope should demonstrate a case example using programming code and should be written in tutorial style. Materials accompanying such tutorial-type papers should be freely available and reusable under a permissive license. An educational contribution does not necessarily need to be a code demo; it can also be a showcase of good practice, a workflow or process tool that the authors created, or a description of a valuable dataset collected, described, and shared by the authors.
Authors should preferably describe their work in adherence to Open Science principles. Code used to establish machine learning models should preferably be shared openly in a git repository and should be clearly documented.
Tunyasuvunakool, K., Adler, J., Wu, Z., et al. (2021). Highly accurate protein structure prediction for the human proteome. Nature, 596(7873), 590–596.