
PERSPECTIVE article

Front. Pharmacol., 02 August 2024
Sec. Translational Pharmacology

Artificial intelligence integration in the drug lifecycle and in regulatory science: policy implications, challenges and opportunities

  • 1Agence Nationale de Sécurité des Médicaments et des Produits de Santé (ANSM), Saint-Denis, France
  • 2INSERM, Laboratoire d'Informatique Médicale et d'Ingénierie des Connaissances en e-Santé, LIMICS, Sorbonne Université, Paris, France
  • 3France Assoc Santé, Paris, France
  • 4Faculty of Pharmacy of Lisbon University, Lisbon, Portugal
  • 5CHRC – Comprehensive Health Research Center, Evora, Portugal
  • 6EA 7379, Faculté de Santé, Université Paris-Est Créteil, Créteil, France
  • 7CHI Créteil, Créteil, France
  • 8Université de Versailles St Quentin-Paris Saclay, Inserm U1018, Guyancourt, France

Artificial intelligence tools promise transformative impacts in drug development. Regulatory agencies face challenges in integrating AI while ensuring reliability and safety in clinical trial approvals, drug marketing authorizations, and post-market surveillance. Incorporating these technologies into the existing regulatory framework and agency practices poses notable challenges, particularly in evaluating the data and models employed for these purposes. Rapid adaptation of regulations and internal processes is essential for agencies to keep pace with innovation, though achieving this requires collective stakeholder collaboration. This article thus delves into the need for adaptations of regulations throughout the drug development lifecycle, as well as the utilization of AI within internal processes of medicine agencies.

Introduction

The healthcare landscape has recently witnessed a proliferation of AI applications, many of which have found practical implementation through medical devices. These applications span various medical specialties, including radiology (Samala et al., 2016), dermatology (Esteva et al., 2017), ophthalmology (Abràmoff et al., 2018), pathology (Litjens et al., 2016), genome interpretation (Kamps et al., 2017), biomarker discovery (Diaz-Uriarte et al., 2022), and drug shortage studies (Pall et al., 2023). It is worth noting, however, that for applications like radiology, the use of AI is still far from routine and needs dedicated teams and skills (Shelmerdine et al., 2024). Furthermore, AI is making new inroads into clinical trial processes (EMA, 2022), with the recent milestone of the first wholly AI-designed drug (Chace, 2024). Although still in its nascent stages, the theoretical potential of AI in pharmaceutical product development is vast, spanning from rational drug design and decision-making support to personalized medication and clinical data management (Duch et al., 2007; Blasiak et al., 2020; D Amico et al., 2023). Consequently, AI tools and applications are poised to play an increasingly pivotal role across all stages of the drug lifecycle, including drug discovery, manufacturing, nonclinical testing, clinical research, and surveillance (Harrer et al., 2019; Gupta et al., 2021; Hauben and Hartford, 2021; Kang et al., 2023). This article elucidates the profound regulatory implications of AI’s existing or potential involvement in pharmaceutical product development at every stage of the drug lifecycle, particularly in relation to the body of evidence utilized for clinical trials and marketing authorization. As regulatory agencies are tasked with ensuring the quality, safety, and efficacy of medicinal drugs and are at the forefront of assessing these evolving methodologies, the overarching aim of this paper is to comprehensively explore the potential spectrum of AI applications in drug-related regulatory science, with proposals for actionable regulatory recommendations. Additionally, this paper reviews the potential of AI to enhance and optimize regulatory processes at regulatory agencies concerning drug assessment, authorization, and post-authorization surveillance.

We will first give an overview of existing or potential AI applications in the drug lifecycle, with step-specific questions about the data and models used and the corresponding regulatory challenges and policy implications. In a second part, we will propose regulatory recommendations or adaptations that may be required to meet those challenges. In a third part, we will show how AI may help optimize and expedite internal regulatory agencies’ processes, to the benefit of patients. We hope that this perspective will contribute to accelerating relevant future regulatory adaptations and understanding among all stakeholders in the field of AI use in the drug lifecycle.

Policy implications regarding stepwise AI applications in the drug lifecycle

The potential uses of AI are outlined here across different phases of the drug life cycle, from drug discovery to clinical trials and post-authorization activities.

AI algorithms are widely applied in drug discovery (Burki, 2019; Vamathevan et al., 2019). Quantitative structure-activity/property relationship (QSAR/QSPR) modeling, as well as structure-based modeling, new molecule design, and synthesis prediction, may be addressed by AI (Jiménez-Luna et al., 2021; Paul et al., 2021; Vora et al., 2023). Computational methods have long been used for ligand-binding probability calculations (Fujita and Winkler, 2016) and for ADMET (absorption, distribution, metabolism, excretion, and toxicity) prediction (Norinder and Bergström, 2006; Beck and Geppert, 2014). Several pharmaceutical companies are currently working with AI organizations (companies and research laboratories) along different lines (Paul et al., 2021). Recently, the first wholly AI-designed drug entered clinical trials (Chace, 2024). During its development, TRAF2- and NCK-interacting kinase (TNIK) was first identified as an anti-fibrotic target using a predictive artificial intelligence (AI) approach (PandaOmics (Kamya et al., 2024)); a small-molecule TNIK inhibitor was then designed using generative AI (Chemistry42 (Ivanenkov et al., 2023)) (Ren et al., 2024). The compound progressed from target discovery to preclinical candidate in 18 months, including traditional testing in animal models, before entering two phase I studies, a remarkably short timeline. By comparison, reaching the preclinical stage without an AI approach usually takes around 5.5–14.5 years (or longer when target discovery is included) and costs around 674 million dollars per preclinical candidate, whereas the cost is reported to be much lower with the AI approach (Pun et al., 2024). The application of AI in drug screening could reduce R&D costs by 50% while increasing efficiency and accuracy (Wang et al., 2019). As another state-of-the-art example of AI applied to drug discovery, AlphaFold predicts protein structures at the atomic level, potentially accelerating drug discovery in cancer research (Abramson et al., 2024; Xu et al., 2024). Nevertheless, impressive as these technologies and their potential appear, most remain at a preliminary stage, success stories are few, and it remains to be determined whether AI will really deliver better drug candidates, faster and more reliably (Schneider et al., 2020; Bender and Cortés-Ciriano, 2021). Moreover, in the cases reviewed above, preclinical validation has so far been carried out in traditional animal models.
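
To give a concrete sense of the ligand-based ideas underlying AI-augmented screening, the sketch below ranks library compounds by fingerprint similarity to a known active. It is a deliberately minimal illustration, not any of the cited platforms; the query molecule and the two candidate SMILES strings are arbitrary examples chosen only so the snippet runs.

```python
# Toy ligand-based virtual screening: rank library compounds by Tanimoto
# similarity of Morgan fingerprints to a known active molecule.
# The SMILES strings below are illustrative placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles: str):
    """Morgan fingerprint (radius 2, 2048 bits) of a SMILES string."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

known_active = fingerprint("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, used here only as a stand-in query
library = {"candidate_1": "c1ccccc1O",                 # phenol
           "candidate_2": "CC(=O)Nc1ccc(O)cc1"}        # paracetamol

scores = {name: DataStructs.TanimotoSimilarity(known_active, fingerprint(smi))
          for name, smi in library.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(score, 3))
```

In practice, such similarity baselines are the starting point on top of which the generative and predictive models cited above are built and benchmarked.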

In addition to potentially helping predict the toxicity of drug candidates, AI approaches in preclinical testing can contribute to replacing, reducing, and refining the use of animals (Luechtefeld et al., 2018), which is a particularly powerful incentive. As in drug discovery, large amounts of toxicological data already exist and can be used to build AI tools relevant for toxicity prediction (Mayr et al., 2016; Luechtefeld et al., 2018; Lysenko et al., 2018; Basile et al., 2019; Wu et al., 2021). Non-animal approaches (such as QSAR, read-across, PBPK modeling, metabolomics, and cell painting, to cite just a few) likewise rely on large toxicological, biological, and chemical datasets (Bray et al., 2016; Luechtefeld et al., 2018; Liu et al., 2023), whose quality should be thoroughly checked and ensured before training any prediction model. Indeed, new kinds of toxicity cannot always be derived from previously learned ones, so reliance solely on historical toxicology data may not be sufficient in several cases.
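
To make the typical workflow concrete, the sketch below shows, under stated assumptions, how such a toxicity prediction model is usually assembled: molecular structures are converted into fingerprints and a standard classifier is trained on curated labels. The file toxicity_data.csv, its column names, and the choice of a random forest are illustrative assumptions and do not correspond to any of the cited tools.

```python
# Minimal QSAR-style toxicity classifier: Morgan fingerprints + random forest.
# Illustrative sketch only; "toxicity_data.csv" and its columns are hypothetical.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def featurize(smiles: str, n_bits: int = 2048):
    """Convert a SMILES string into a Morgan fingerprint bit vector (None if unparsable)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits))

data = pd.read_csv("toxicity_data.csv")            # hypothetical columns: smiles, toxic (0/1)
features = [featurize(s) for s in data["smiles"]]
mask = [f is not None for f in features]           # drop structures RDKit cannot parse
X = np.vstack([f for f in features if f is not None])
y = data.loc[mask, "toxic"].to_numpy()

model = RandomForestClassifier(n_estimators=500, random_state=0)
# Cross-validated ROC AUC is only a first, optimistic estimate of predictivity;
# regulatory-grade validation would also require external and prospective test sets.
print("CV ROC AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
model.fit(X, y)
```

Even in this simplified form, the output remains probabilistic and only as good as the curated labels, which is exactly why data quality and applicability-domain checks matter before any regulatory use.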

In the future, AI tools might be used to improve clinical trials with digital twins and to optimize control arms (EMA, 2022; Fountzilas et al., 2022; Askin et al., 2023). They might help in patient selection and monitoring (eligibility, suitability, motivation, empowerment, adherence, and retention), thereby increasing clinical trials’ success rates (Harrer et al., 2019). They could also contribute to designing more relevant trials, especially for precision medicine (for a review, see (Fountzilas et al., 2022)). Patient selection is the area where AI is expected to be used most, followed by trial design (about half as often) and analysis (about a third as often) (Askin et al., 2023). Overall, it is the mass and diversity of data that AI can process that could make the difference. Biomedical data from different origins (such as health insurance medical records, hospitals, genomics, biobanks, and radiology) may indeed be used to improve the enrolment, design, and follow-up of clinical trials (Acosta et al., 2022). AI is also used to generate synthetic clinical data (synthetic patients) to accelerate precision medicine and increase the coverage of the population involved in clinical trials (Yu et al., 2018; EMA, 2022).
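
As a hedged illustration of the patient selection step, the sketch below combines hard protocol criteria with a learned prioritization score. Every file name, column, threshold, and the logistic model itself are hypothetical assumptions; in practice, any such ranking would have to be audited for the selection biases discussed later in this article.

```python
# Illustrative sketch of AI-assisted trial pre-screening (all fields hypothetical):
# hard eligibility rules first, then a learned score to prioritise outreach.
import pandas as pd
from sklearn.linear_model import LogisticRegression

patients = pd.read_csv("ehr_extract.csv")      # hypothetical columns: age, egfr, hba1c, prior_dropout, consented_before

# Step 1: deterministic inclusion/exclusion criteria taken from the protocol.
eligible = patients[(patients["age"].between(18, 75)) & (patients["egfr"] >= 60)]

# Step 2: rank eligible patients by predicted retention, using a model trained
# on outcomes of previous trials (historical data assumed to be available).
history = pd.read_csv("previous_trials.csv")   # hypothetical: same features + "completed" label
features = ["age", "egfr", "hba1c", "prior_dropout", "consented_before"]
model = LogisticRegression(max_iter=1000).fit(history[features], history["completed"])

eligible = eligible.assign(retention_score=model.predict_proba(eligible[features])[:, 1])
shortlist = eligible.sort_values("retention_score", ascending=False).head(100)
```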

AI may be used to improve quality-by-design approaches (Rantanen and Khinast, 2015; Manzano et al., 2021). This includes tools for interpreting experimental big data from various sources, for purposes such as real-time process control and real-time quality assurance (Hussain et al., 1991; Takayama et al., 1999; Rantanen and Khinast, 2015).
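
A minimal sketch of what data-driven, near real-time quality assurance can look like is given below: an anomaly detector is trained on sensor data from batches that met specifications and then screens incoming readings. The sensor names, file, and contamination setting are assumptions made only for illustration.

```python
# Sketch of data-driven fault detection for continuous manufacturing.
# Sensor names and the training file are hypothetical; the idea is to train on
# "normal" batches and flag deviations in near real time.
import pandas as pd
from sklearn.ensemble import IsolationForest

normal_runs = pd.read_csv("normal_batches.csv")      # e.g. temperature, pressure, blend_uniformity
sensors = ["temperature", "pressure", "blend_uniformity"]

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_runs[sensors])

def check_sample(sample: pd.DataFrame) -> bool:
    """Return True if the incoming sensor reading looks anomalous."""
    return detector.predict(sample[sensors])[0] == -1   # -1 marks an outlier in scikit-learn
```

Such a component only flags deviations; deciding what a flag means for batch release remains a validated, human-overseen quality decision.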

Pharmacovigilance (PV) is a data-driven field: it requires the collection, processing, and analysis of large amounts of data from very heterogeneous sources (Carbonell et al., 2015). Here, AI techniques may be used for signal detection, data intake, or analysis (Hauben and Hartford, 2021). In practice, they are used and recommended mostly for signal detection and for processing before data intake (Ball and Dal Pan, 2022; Martin et al., 2022). Industry groups have reported the performance of several AI systems for signal detection and adverse event processing (Schmider et al., 2019; Routray et al., 2020). One study showed that using safety database fields populated by dedicated AI applications (artificial intelligence and robotic process automation) as a surrogate for otherwise time-consuming and costly direct annotation of source documents is viable and feasible (Schmider et al., 2019). Another describes an augmented-intelligence system based on a neural network that provides an accurate and scalable determination of adverse event seriousness in spontaneous, solicited, and medical literature reports (Routray et al., 2020). Data from a wide variety of sources can theoretically be used, including real-world data such as electronic healthcare records (EHR) or social media (Comfort et al., 2018; Ball and Dal Pan, 2022; Actualité, 2024). AI can also be used for finer detection of drug misuse (Afshar et al., 2022).
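
The cited industrial systems rely on more elaborate neural architectures; the sketch below is only a simplified stand-in showing the general shape of such a seriousness-triage component, using a TF-IDF plus logistic regression baseline on a hypothetical dataset of report narratives.

```python
# Simplified stand-in for AI-based seriousness triage of adverse event reports.
# The dataset, its columns, and the baseline model are hypothetical; published
# industrial systems use more elaborate neural approaches.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

reports = pd.read_csv("icsr_narratives.csv")     # hypothetical columns: narrative, serious (0/1)
X_train, X_test, y_train, y_test = train_test_split(
    reports["narrative"], reports["serious"], test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))   # precision/recall per class
```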

Actionable recommendations

Regulatory agencies and stakeholders’ information needs: transparency and explainability

In the fast-paced world of drug development, transparency is a cornerstone of trust and accountability (Transparency, 2024). When it comes to the application of artificial intelligence (AI), transparency becomes even more crucial (Crossnohere et al., 2022). In this respect, it is of utmost importance that stakeholders – especially regulators – have access to clear information about the AI models driving drug development. The goals, data used, intended applications, advantages, and drawbacks of AI models should be made clear so that everyone understands how they fit into each specific drug development program. Regulators need this level of transparency and explainability to assess accuracy, precision, limitations, and uncertainties effectively (Hicks et al., 2022). One answer is therefore explainable AI (xAI), now a discipline in its own right (Alizadehsani et al., 2024). Several mathematical techniques are used to make AI methods and results easier to understand (reviewed in (Holzinger et al., 2022)). In this respect, techniques such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Integrated Gradients, and Counterfactual Explanations offer windows into the black box of AI decision-making, providing clarity on the processes behind the algorithms (Mertes et al., 2022; Kırboğa et al., 2023; Wang et al., 2024). Transparency does not end once the model is built: since AI models may evolve, regulators need to stay informed of updates and changes, ensuring ongoing monitoring of their performance and impact.
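
As a brief, hedged illustration of how such post hoc explanations are produced in practice, the snippet below applies SHAP to a fitted tree-based model, for instance the toxicity classifier sketched earlier; the objects model and X are assumed to already exist, and the exact shape returned by shap_values depends on the model type and shap version.

```python
# Minimal SHAP sketch: attribute predictions of a fitted tree ensemble to its inputs.
import shap

explainer = shap.TreeExplainer(model)      # 'model': any fitted tree ensemble (assumed to exist)
shap_values = explainer.shap_values(X)     # per-feature contributions for each sample
                                           # (for a binary classifier this may be returned per class)

# For a single compound, the largest |SHAP| values indicate which input features
# (e.g. fingerprint bits, i.e. substructures) pushed the prediction up or down.
shap.summary_plot(shap_values, X)          # global view of feature influence across the dataset
```

Outputs like these are what a regulator could realistically ask a developer to provide alongside headline performance metrics.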

In summary, transparency is about empowering regulatory agencies to acquire all the information they need to make informed decisions. This is the first condition for regulators to be able to assess AI use in drug development. Regulators must nevertheless also take further action to continue ensuring the safety of patients treated with drugs for which AI has been used during one or several steps of development. Since the data, models, and applications utilized when applying AI tools depend on the drug lifecycle step (Figure 1A), we propose here stepwise regulatory actions or adaptations. We also show how AI may help optimize and expedite regulatory agencies’ internal processes, simplify review timelines, and improve efficiency while maintaining the highest safety standards (Figure 1B).


Figure 1. Domains of potential or existing use of AI in the drug lifecycle and for internal regulatory agencies’ processes. (A) Existing or potential AI uses at each step of the drug lifecycle, from drug discovery to post-market surveillance, currently developed by researchers from academia and industry. (B) Examples of AI applications existing or in development in medicine agencies (in clockwise order of complexity, starting from integration of big data from various sources and file formats in databases with automated annotations). These applications have the potential to enhance and streamline internal agency processes. They are developed in collaboration with expert AI research groups.

Challenges and corresponding proposals in adjusting to AI’s use across the drug lifecycle

For the whole drug lifecycle, the EMA suggests a risk-based approach so that developers preemptively and proactively establish the risks that need to be monitored and/or mitigated (EMA, 2023a). The FDA and the MHRA mainly address the use of AI in medical devices and in digital health technologies (sensors, wearables, etc.) (federalregister, 2023; MHRA, 2024a; MHRA, 2024b). Different centers within the FDA (CBER, CDER, CDRH, and OCP) also collaborate to leverage AI and other advanced technologies to enhance the regulation of medical products (FDA, 2024b). Overall, no regulatory recommendations specific to AI use in drug development have been published to date. Specific regulatory challenges are therefore delineated here stepwise (drug discovery, non-clinical testing and toxicity, translational and clinical research, pharmaceutical manufacturing, and pharmacovigilance), together with points to consider and possible future adaptations, presented in Table 1.


Table 1. Regulator’s considerations and possible regulatory actions by subject of potential interest in the different steps of the drug lifecycle.

In the race for innovative therapies, AI emerges as a powerful ally in drug discovery. Only one wholly AI-designed drug has entered clinical trials thus far (Generative Artificial Intelligence for Drug Discovery, 2024), and regulatory agencies are not mandated to assess the methodologies used unless they contribute to the overall body of evidence. However, as AI models become integral to drug design, a dialogue between regulators and developers becomes imperative to ensure transparency and understanding of model performances as regards their predictions’ accuracy and reproducibility (Table 1 A). Additionally, AI holds promise for accelerating drug repurposing efforts, leveraging big data analysis to identify new medical indications for existing drugs with unprecedented speed and precision (Zong et al., 2022).

In regulatory science, and specifically in non-clinical testing and toxicity prediction, AI tools have great potential to predict safety outcomes, but their suitability remains to be determined. First of all, these tools offer a promising avenue to potentially reduce or even replace the traditional reliance on animal testing, which is a powerful incentive (EMA, 2023a). Notably, the FDA Modernization Act 2.0 in the United States takes a stride forward by curbing the mandatory use of animal models for toxicity predictions (Han, 2023). AI non-clinical models draw from a rich diversity of data sources, from in vitro and in vivo experiments to expansive databases, employing diverse algorithms and machine learning techniques (Maertens et al., 2022). Toxicity predictions generated using AI (machine learning on relevant biological, chemical, or toxicological data) are inherently probabilistic and contingent upon the quality and quantity of the input data (Mayr et al., 2016; Wu et al., 2021; Maertens et al., 2022). Rigorous assessment of the data and models used, and potential adjustments to regulatory frameworks, will therefore be necessary in the long run (Table 1 B) (Paul et al., 2021). Several efforts have been made or are underway to curate and reliably annotate toxicological databases (Lea et al., 2017; Nair et al., 2020; Wu et al., 2023).

In translational and clinical research, several regulatory projects regarding AI use are currently led by the FDA and EMA (EMA, 2023a; FDA, 2024c), underscoring the burgeoning potential of AI in these domains. During the COVID-19 pandemic, AI played a crucial role in accelerating vaccine trials. Companies like Moderna and Pfizer used AI to design trials, monitor patient data, and streamline regulatory submissions. AI tools helped identify suitable trial participants more quickly, designed adaptive trial protocols that adjusted in real-time based on interim results, and monitored adverse events to ensure participant safety. This use of AI contributed to the unprecedented speed at which COVID-19 vaccines were developed and approved (reviewed in (Sharma et al., 2022)). However, the current absence of regulations in this domain raises pertinent questions, highlighting the pressing need for new oversight (Arora and Arora, 2022). Take, for instance, the digitization of clinical trials—an innovative approach leveraging data from electronic health records (EHR), routine medical exams, and various diagnostic tests. This digital transformation not only streamlines patient selection but also opens doors for broader trial participation. Yet, navigating the complexities of data management in these trials necessitates transparency in AI algorithms (Kasahara et al., 2024), and several open questions remain (Table 1 C).

In drug manufacturing, AI tools are also revolutionizing various aspects, from process design and scale-up to advanced control and fault detection. Both the FDA and the EMA are actively crafting recommendations in this domain (EMA, 2023a; FDA, 2024a). While the full extent of AI’s impact is yet to be realized (consultations are ongoing), it is evident that the field is rapidly expanding (Table 1 D). Given that these techniques primarily originate in industrial sectors, fostering closer collaboration between manufacturers and regulators is imperative. Notably, the real-time application of these methods on the factory floor poses unique challenges, necessitating robust regulatory frameworks and onsite inspections for compliance.

In pharmacovigilance, AI is gaining traction as a potent tool for enhancing drug safety monitoring. The EMA’s reflection paper acknowledges its significance, while the FDA’s discussion paper delineates its role across case processing, evaluation, and automated submissions prior to individual safety report submissions (EMA, 2023a; FDA, 2024c). Regulators already take advantage of AI techniques to better handle big data from various sources. Pharmacovigilance is therefore the field in which the use of AI is currently the most mature and in which it is already used by regulatory agencies (Martin et al., 2022; Routray et al., 2020; Actualité, 2024), which may and should establish collaborations with academic research laboratories to apply AI to specific projects involving weak or early signals.

More specifically, AI is used here to improve the analysis of data from institutional databases (the World Health Organization’s VigiBase, the EMA’s EudraVigilance, the FDA’s Adverse Event Reporting System (FAERS), etc.) by developing AI algorithms that outperform the classic statistical ones. AI may also be used to improve data quality in these databases (symbolic AI), allowing better groupings before analysis; to increase the number of cases, by developing tools that collect more data from physicians and patients; or to exploit other sources (EHR or social media) (Table 1 E).
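
For orientation, the classic statistical methods that AI-based signal detection is benchmarked against are disproportionality analyses. The sketch below computes a reporting odds ratio (ROR) with a 95% confidence interval from a 2x2 contingency table; the counts are purely hypothetical.

```python
# Classic disproportionality baseline for signal detection:
# reporting odds ratio (ROR) with a 95% confidence interval from a 2x2 table.
# The example counts are hypothetical.
import math

def reporting_odds_ratio(a, b, c, d):
    """a: reports with drug & event, b: drug & other events,
       c: other drugs & event, d: other drugs & other events."""
    ror = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se_log)
    upper = math.exp(math.log(ror) + 1.96 * se_log)
    return ror, lower, upper

print(reporting_odds_ratio(a=40, b=960, c=2000, d=197000))
# A signal is conventionally considered when the lower confidence bound exceeds 1
# and a minimum number of cases is reached; ML-based methods aim to improve on this.
```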

Another key aspect is that, as the landscape of drug development evolves, it is becoming increasingly clear that regulatory agencies will need to bolster their expertise in AI. The demand for AI-specific skills varies with the stage of drug development, so specific, stepwise upskilling of assessors will be required. Indeed, assessors will need to evaluate the specific data, applications, and models used in order to prepare the corresponding scientific assessment reports (see specific data sources and applications in Table 1).

Seizing AI opportunities: optimizing regulatory processes

Regulatory agencies have to deal with amounts and sources of data that are increasingly massive and diverse (raw data reports, real-world data, images, tables, EHR, etc.). Beyond the drug lifecycle, AI applications are therefore also likely to find an increasing place in regulatory assessments (Figure 1B). Recently published advances come from the EMA (Jornet, 2024). First, natural language processing and optical character recognition tools may be used to annotate, extract, and categorize relevant data from various sources submitted to agencies (files for clinical trials and marketing authorizations, including text, tables, and images). The output can then be stored in AI-amenable databases, and an application can assess the contents and notify the relevant assessors. This would save time, improve reproducibility, and reduce errors (sparing humans repetitive, low-added-value tasks). The EMA uses AI to support the validation of variations by flagging missing documents, detecting dissimilarities, and automatically identifying changes. AI tools can also find personal data, compare documents, perform triage, and carry out automated literature reviews. At the European level, several other projects aim to challenge and further AI use in a regulatory setting (EMA, 2023b; bundesgesundheitsministerium, 2024; Regulatorische, 2024). A new NLP approach for the harmonization of European medicinal product information has also recently been published (Bergman et al., 2022).
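
The sketch below illustrates, under stated assumptions, two of the screening tasks mentioned above: checking a submission against a required-documents checklist and flagging potential personal data with off-the-shelf named-entity recognition. The checklist, the document titles, and the choice of spaCy’s small English model are illustrative assumptions, not a description of any agency’s actual tooling.

```python
# Sketch of two document-screening tasks: flagging missing sections in a
# submission and spotting potential personal data before publication.
# Checklist and inputs are hypothetical; spaCy's small English model is used
# here only as an example NER component.
import spacy

REQUIRED_SECTIONS = {"cover letter", "protocol", "investigator brochure", "risk management plan"}

def missing_sections(submitted_titles):
    """Return required sections with no matching submitted document title."""
    titles = {t.lower() for t in submitted_titles}
    return {req for req in REQUIRED_SECTIONS
            if not any(req in title for title in titles)}

nlp = spacy.load("en_core_web_sm")

def personal_data_mentions(text):
    """Flag person names (potential personal data) for reviewer attention."""
    doc = nlp(text)
    return [ent.text for ent in doc.ents if ent.label_ == "PERSON"]

print(missing_sections(["Cover letter_v2.pdf", "Protocol_final.docx"]))
print(personal_data_mentions("The investigator, Dr. Jane Doe, reported the event."))
```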

In the regulatory setting, AI tools may be used to categorize and annotate texts from various sources and to progressively help build a collective memory for comparing files, performing pre-analyses, and producing knowledge graphs (making comparisons easier). They could also, in theory, serve as regulatory assessment assistants in the near future.
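
To show what such a collective memory might look like in its simplest form, the toy sketch below represents products, substances, and assessment reports as a knowledge graph and answers one comparison query. All identifiers are invented for illustration.

```python
# Tiny sketch of a regulatory "collective memory" as a knowledge graph:
# nodes for products, substances and assessment reports, edges for relations.
# All identifiers below are invented for illustration.
import networkx as nx

g = nx.Graph()
g.add_edge("Product A", "substance X", relation="contains")
g.add_edge("Product B", "substance X", relation="contains")
g.add_edge("Product A", "assessment report 2021-017", relation="assessed_in")
g.add_edge("Product B", "assessment report 2023-042", relation="assessed_in")

# Comparison query: which previously assessed products share a substance with Product B?
shared = {p for s in g.neighbors("Product B") if s.startswith("substance")
          for p in g.neighbors(s) if p != "Product B"}
print(shared)   # -> {'Product A'}
```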

Perspectives and conclusion

Today, it is difficult to gather accurate published information on AI use across the drug lifecycle, except at the clinical trial stage. This shows that information about AI use for health topics or in health products is not readily and easily accessible. The first major factor enabling relevant assessment of these tools and applications by regulators is transparency, which goes hand in hand with explainability using relevant tools [see above and (Lundberg and Lee, 2017)]. The second will be adapting current recommendations to develop new regulatory guidelines for AI use in the healthcare setting and collaborating with researchers, physicians, and industry to improve the relevance of these guidelines. This could foster the transparency that regulators, the public, and health professionals demand (Vellido, 2019). Other factors to consider are the inherent complexity of AI models and their “black-box” nature (Rudin, 2019), together with concerns about data privacy and security (Price, 2017). These are the main factors that may raise concerns among stakeholders such as patients and the general public. The International Council for Harmonisation (ICH) has recently addressed the use of AI and modeling for some topics related to the quality of drugs (e.g., product dissolution and in vivo/in vitro relationships/correlations, purge and fate of impurities, container/closure integrity, etc.) (ICH, 2021), which would impact the ICH M7 guideline. The ICH M15 concept paper, “Model-Informed Drug Development General Principles Guideline,” also considers future approaches such as machine learning and AI (M15, 2022). The ICH is therefore already considering the use of AI in drug development.

Applications of AI to internal processes at agencies also have great potential. To this end, important work remains to be done at regulatory agencies to select and validate the data entered into the databases that will be used to build the collective memory enabling better and quicker assessments. In any case, this will require both collaboration with academic AI research laboratories and the acquisition of internal competencies.

As we look into the future of drug regulation amidst the burgeoning era of AI, several critical questions remain.

First of all, it remains to be determined what concrete steps can be taken to ensure that all stakeholders are involved (including patient associations, in the larger frame of health democracy) in the utilization of AI tools in drug development. This health democracy setting will be important to better take into account potential ethical issues, such as biases in patient selection and monitoring in clinical studies. Another important point for the future will be to determine how regulatory bodies can navigate the complexities of AI models while ensuring that all corresponding ethical aspects are addressed. The ethical implications of using AI in drug development include potential bias in AI models, informed consent, and patient autonomy. The use of AI also raises privacy concerns, including data security, patient confidentiality, and compliance with regulations such as the GDPR. Potential responses include implementing robust data anonymization techniques, ensuring diverse and representative datasets to reduce bias, involving all stakeholders (including patient representatives), establishing transparent AI model validation processes (transparency, explainability), and adhering to local and international ethical guidelines and frameworks. Looking ahead, international collaboration among regulatory authorities will be instrumental in developing common responses and standards for evaluating AI technologies in pharmaceutical development. By harnessing the collective expertise and resources of global stakeholders, regulators should forge an adaptive framework that fosters transparency, innovation, and patient-centric outcomes.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

WO-G: Conceptualization, Methodology, Project administration, Supervision, Validation, Writing–original draft, Writing–review and editing. M-CJ: Conceptualization, Methodology, Validation, Writing–review and editing. J-PT: Validation, Writing–review and editing. SO-M: Validation, Writing–review and editing. LB: Investigation, Writing–review and editing. PM: Supervision, Validation, Writing–review and editing. JA: Supervision, Validation, Writing–review and editing.

Members of The Scientific Advisory Board of ANSM

Joël Ankri (Chairman), Janine Barbot, Robert Barouki, Éric Bellissant, Patrick Castel, Patrick Chaskiel, Nicolas Clere, Christiane Druml, Sofia de Oliveira Martins, François Eisinger, Éric Ezan, Catherine Gourlay-France, Jamila Hamdani, Walter Janssens, Marie-Christine Jaulent, Maria Emilia Monteiro, Sylvie Odent, Fred Paccaud, Dominique Pougheon, Vololona Rabeharisoa, Victoria Rollason, Valérie Sautou, Jean-Pierre Thierry, Jean-Paul Vernant.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. All work was funded by ANSM.

Acknowledgments

The authors are very grateful to the following ANSM officers for their invaluable inputs: Malika Boussaid, Nicolas Delemer, Pierre Demolis, Vincent Gazin, David Morelle, Jean-Michel Race, Stéphane Vignot, and Mahmoud Zureik. We would also like to thank the following experts: Emmanuel Bacry, Arnaud Bayle, Catherine Duclos, Marco Fiorini, Julian Isla and Xavier Tannier.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N., and Folk, J. C. (2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. Npj Digit. Med. 1 (1), 39. doi:10.1038/s41746-018-0040-6

Abramson, J., Adler, J., Dunger, J., Evans, R., Green, T., Pritzel, A., et al. (2024). Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature 630 (8016), 493–500. doi:10.1038/s41586-024-07487-w

Acosta, J. N., Falcone, G. J., Rajpurkar, P., and Topol, E. J. (2022). Multimodal biomedical AI. Nat. Med. 28 (9), 1773–1784. PMID:36109635. doi:10.1038/s41591-022-01981-2

Actualité (2024). Actualité - Lancement de la nouvelle application nationale de pharmacovigilance. ANSM. Available at: https://ansm.sante.fr/actualites/lancement-de-la-nouvelle-application-nationale-de-pharmacovigilance [Accessed November 3, 2023]

Afshar, M., Sharma, B., Dligach, D., Oguss, M., Brown, R., Chhabra, N., et al. (2022). Development and multimodal validation of a substance misuse algorithm for referral to treatment using artificial intelligence (SMART-AI): a retrospective deep learning study. Lancet Digit. Health 4 (6), e426–e435. doi:10.1016/S2589-7500(22)00041-3

Alizadehsani, R., Oyelere, S. S., Hussain, S., Jagatheesaperumal, S. K., Calixto, R. R., Rahouti, M., et al. (2024). Explainable artificial intelligence for drug discovery and development: a comprehensive survey. IEEE Access 12, 35796–35812. doi:10.1109/ACCESS.2024.3373195

Arora, A., and Arora, A. (2022). Synthetic patient data in health care: a widening legal loophole. Lancet 399 (10335), 1601–1602. doi:10.1016/S0140-6736(22)00232-X

Askin, S., Burkhalter, D., Calado, G., and El Dakrouni, S. (2023). Artificial Intelligence Applied to clinical trials: opportunities and challenges. Health Technol. 13 (2), 203–213. PMID:36923325. doi:10.1007/s12553-023-00738-2

Ball, R., and Dal Pan, G. (2022). “Artificial intelligence” for pharmacovigilance: ready for prime time? Drug Saf. N. Z. 45 (5), 429–438. PMID:35579808. doi:10.1007/s40264-022-01157-4

Basile, A. O., Yahi, A., and Tatonetti, N. P. (2019). Artificial intelligence for drug toxicity and safety. Trends Pharmacol. Sci. Engl. 40 (9), 624–635. PMID:31383376. doi:10.1016/j.tips.2019.07.005

Beck, B., and Geppert, T. (2014). Industrial applications of in silico ADMET. J. Mol. Model. 20 (7), 2322. PMID:24972798. doi:10.1007/s00894-014-2322-5

Bender, A., and Cortés-Ciriano, I. (2021). Artificial intelligence in drug discovery: what is realistic, what are illusions? Part 1: ways to make an impact, and why we are not there yet. Drug Discov. Today 26 (2), 511–524. PMID:33346134. doi:10.1016/j.drudis.2020.12.009

Bergman, E., Sherwood, K., Forslund, M., Arlett, P., and Westman, G. (2022). A natural language processing approach towards harmonisation of European medicinal product information. PLOS ONE 17 (10), e0275386. doi:10.1371/journal.pone.0275386

Blasiak, A., Khong, J., and Kee, T. (2020). CURATE.AI: optimizing personalized medicine with artificial intelligence. SLAS Technol. 25 (2), 95–105. doi:10.1177/2472630319890316

Bray, M.-A., Singh, S., Han, H., Davis, C. T., Borgeson, B., Hartland, C., et al. (2016). Cell Painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes. Nat. Protoc. 11 (9), 1757–1774. doi:10.1038/nprot.2016.105

Bundesgesundheitsministerium (2024). Regulatorische Nutzung von Big Data-Strategien basierend auf OMICS-Daten zur effizienten Entwicklung, Zulassung und sicheren Anwendung von biologischen Arzneimitteln (RENUBIA). Available at: https://www.bundesgesundheitsministerium.de/ministerium/ressortforschung/handlungsfelder/digitalisierung/renubia (Accessed March 2, 2024).

Burki, T. (2019). Pharma blockchains AI for drug development. Lancet 393 (10189), 2382. doi:10.1016/S0140-6736(19)31401-1

Carbonell, P., Mayer, M. A., and Bravo, À. (2015). Exploring brand-name drug mentions on Twitter for pharmacovigilance. Stud. Health Technol. Inf. Neth. 210, 55–59. doi:10.3233/978-1-61499-512-8-55

Chace, C. (2024). First wholly AI-developed drug enters phase 1 trials. Forbes. Available at: https://www.forbes.com/sites/calumchace/2022/02/25/first-wholly-ai-developed-drug-enters-phase-1-trials/ [Accessed July 25, 2023].

Comfort, S., Perera, S., Hudson, Z., Dorrell, D., Meireis, S., Nagarajan, M., et al. (2018). Sorting through the safety data haystack: using machine learning to identify individual case safety reports in social-digital media. Drug Saf. N. Z. 41 (6), 579–590. PMID:29446035. doi:10.1007/s40264-018-0641-7

Crossnohere, N. L., Elsaid, M., Paskett, J., Bose-Brill, S., and Bridges, J. F. P. (2022). Guidelines for artificial intelligence in medicine: literature review and content analysis of frameworks. J. Med. Internet Res. 24 (8), e36823. doi:10.2196/36823

D Amico, S., Dall’Olio, D., Sala, C., Dall’Olio, L., Sauta, E., Zampini, M., et al. (2023). Synthetic data generation by artificial intelligence to accelerate research and precision medicine in hematology. JCO Clin. Cancer Inf. 7 (7), e2300021. doi:10.1200/CCI.23.00021

Diaz-Uriarte, R., Gómez de Lope, E., Giugno, R., Fröhlich, H., Nazarov, P. V., Nepomuceno-Chamorro, I. A., et al. (2022). Ten quick tips for biomarker discovery and validation analyses using machine learning. PLoS Comput. Biol. 18 (8), e1010357. PMID:35951526. doi:10.1371/journal.pcbi.1010357

Duch, W., Swaminathan, K., and Meller, J. (2007). Artificial intelligence approaches for rational drug design and discovery. Curr. Pharm. Des. 13 (14), 1497–1508. doi:10.2174/138161207780765954

EMA (2023a). Reflection paper on the use of artificial intelligence in lifecycle medicines. European Medicines Agency. Available at: https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-lifecycle-medicines (Accessed July 25, 2023).

EMA, HMA (2023b). Clusters of excellence discussion paper. European Commission.

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), 115–118. doi:10.1038/nature21056

FDA (2024a). Discussion paper: artificial intelligence in drug manufacturing. FDA CDER Framew. Regul. Adv. Manuf. Eval. Available at: https://www.fda.gov/media/165743/download.

FDA (2024b). Artificial intelligence & medical products: how CBER, CDER, CDRH, and OCP are working together. Available at: https://www.fda.gov/media/177030/download.

FDA (2024c). Using artificial intelligence and machine learning in the development of drug and biological products. Available at: https://www.fda.gov/media/167973/.

federalregister (2023). Framework for the use of digital health technologies in drug and biological product development; availability. Fed. Regist. Available at: https://www.federalregister.gov/documents/2023/03/24/2023-06066/framework-for-the-use-of-digital-health-technologies-in-drug-and-biological-product-development [Accessed July 8, 2024].

Fountzilas, E., Tsimberidou, A. M., Vo, H. H., and Kurzrock, R. (2022). Clinical trial design in the era of precision medicine. Genome Med. 14 (1), 101. PMID:36045401. doi:10.1186/s13073-022-01102-1

Fujita, T., and Winkler, D. A. (2016). Understanding the roles of the “two QSARs.”. J. Chem. Inf. Model. 56 (2), 269–274. PMID:26754147. doi:10.1021/acs.jcim.5b00229

Generative Artificial Intelligence for Drug Discovery (2024). How the first AI-discovered and AI-designed drug progressed to phase 2 clinical testing | research communities by springer nature. Available at: https://communities.springernature.com/posts/draft-9b0556ab-aa4d-49fd-bf3c-9c1a621defff (Accessed May 22, 2024).

Gupta, R., Srivastava, D., Sahu, M., Tiwari, S., Ambasta, R. K., and Kumar, P. (2021). Artificial intelligence to deep learning: machine intelligence approach for drug discovery. Mol. Divers 25 (3), 1315–1360. PMID:33844136. doi:10.1007/s11030-021-10217-3

Han, J. J. (2023). FDA Modernization Act 2.0 allows for alternatives to animal testing. Artif. Organs 47 (3), 449–450. doi:10.1111/aor.14503

Harrer, S., Shah, P., Antony, B., and Hu, J. (2019). Artificial intelligence for clinical trial design. Trends Pharmacol. Sci. 40 (8), 577–591. PMID:31326235. doi:10.1016/j.tips.2019.05.005

Hauben, M., and Hartford, C. G. (2021). Artificial intelligence in pharmacovigilance: scoping points to consider. Clin. Ther. U. S. 43 (2), 372–379. PMID:33478803. doi:10.1016/j.clinthera.2020.12.014

Hicks, S. A., Strümke, I., Thambawita, V., Hammou, M., Riegler, M. A., Halvorsen, P., et al. (2022). On evaluation metrics for medical applications of artificial intelligence. Sci. Rep. 12 (1), 5979. doi:10.1038/s41598-022-09954-8

Holzinger, A., Saranti, A., Molnar, C., Biecek, P., and Samek, W. (2022). “Explainable AI methods - a brief overview,” in XxAI - explain AI. Editors A. Holzinger, R. Goebel, R. Fong, T. Moon, K.-R. Müller, and W. Samek (Cham: Springer International Publishing), 13–38. doi:10.1007/978-3-031-04083-2_2

Hussain, A. S., Yu, X., and Johnson, R. D. (1991). Application of neural computing in pharmaceutical product development. Pharm. Res. 08 (10), 1248–1252. doi:10.1023/A:1015843527138

ICH (2021). FUTURE OPPORTUNITIES and MODERNIZATION OF ICH QUALITY GUIDELINES: IMPLEMENTATION OF THE ICH QUALITY VISION1 FROM THE ICH QUALITY REFLECTION PAPER ICH QUALITY DISCUSSION GROUP (2019-2021). Available at: https://database.ich.org/sites/default/files/ICH_QDG_Recommendation_2021_1012.pdf.

Ivanenkov, Y. A., Polykovskiy, D., Bezrukov, D., Zagribelnyy, B., Aladinskiy, V., Kamya, P., et al. (2023). Chemistry42: an AI-driven platform for molecular design and optimization. J. Chem. Inf. Model. 63 (3), 695–701. doi:10.1021/acs.jcim.2c01191

Jiménez-Luna, J., Grisoni, F., Weskamp, N., and Schneider, G. (2021). Artificial intelligence in drug discovery: recent advances and future perspectives. Expert Opin. Drug Discov. 16 (9), 949–959. PMID:33779453. doi:10.1080/17460441.2021.1909567

Jornet, J. B. (2024). AI and digitalisation at EMA.

Kamps, R., Brandão, R., Bosch, B., Paulussen, A., Xanthoulea, S., Blok, M., et al. (2017). Next-generation sequencing in oncology: genetic diagnosis, risk prediction and cancer classification. Int. J. Mol. Sci. 18 (2), 308. doi:10.3390/ijms18020308

Kamya, P., Ozerov, I. V., Pun, F. W., Tretina, K., Fokina, T., Chen, S., et al. (2024). PandaOmics: an AI-driven platform for therapeutic target and biomarker discovery. J. Chem. Inf. Model. 64 (10), 3961–3969. doi:10.1021/acs.jcim.3c01619

Kang, J., Chowdhry, A. K., Pugh, S. L., and Park, J. H. (2023). Integrating artificial intelligence and machine learning into cancer clinical trials. Semin. Radiat. Oncol. 33 (4), 386–394. doi:10.1016/j.semradonc.2023.06.004

Kasahara, A., Mitchell, J., Yang, J., Cuomo, R. E., McMann, T. J., and Mackey, T. K. (2024). Digital technologies used in clinical trial recruitment and enrollment including application to trial diversity and inclusion: a systematic review. Digit. Health 10, 20552076241242390. doi:10.1177/20552076241242390

Kırboğa, K. K., Abbasi, S., and Küçüksille, E. U. (2023). Explainability and white box in drug discovery. Chem. Biol. Drug Des. 102 (1), 217–233. doi:10.1111/cbdd.14262

Lea, I. A., Gong, H., Paleja, A., Rashid, A., and Fostel, J. (2017). CEBS: a comprehensive annotated database of toxicological data. Nucleic Acids Res. 45 (D1), D964-D971–D971. doi:10.1093/nar/gkw1077

Litjens, G., Sánchez, C. I., Timofeeva, N., Hermsen, M., Nagtegaal, I., Kovacs, I., et al. (2016). Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 6 (1), 26286. doi:10.1038/srep26286

Liu, A., Seal, S., Yang, H., and Bender, A. (2023). Using chemical and biological data to predict drug toxicity. SLAS Discov. 28 (3), 53–64. doi:10.1016/j.slasd.2022.12.003

Luechtefeld, T., Marsh, D., Rowlands, C., and Hartung, T. (2018). Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility. Toxicol. Sci. Off. J. Soc. Toxicol. 165 (1), 198–212. PMID:30007363. doi:10.1093/toxsci/kfy152

Lundberg, S., and Lee, S.-I. (2017). A unified approach to interpreting model predictions. arXiv. doi:10.48550/ARXIV.1705.07874

Lysenko, A., Sharma, A., Boroevich, K. A., and Tsunoda, T. (2018). An integrative machine learning approach for prediction of toxicity-related drug safety. Life Sci. Alliance 1 (6), e201800098. PMID:30515477. doi:10.26508/lsa.201800098

M15 (2022). Model-informed drug development general principles guideline. ICH concept paper.

Maertens, A., Golden, E., Luechtefeld, T. H., Hoffmann, S., Tsaioun, K., and Hartung, T. (2022). Probabilistic risk assessment – the keystone for the future of toxicology. ALTEX 39 (1), 3–29. doi:10.14573/altex.2201081

Manzano, T., Fernàndez, C., Ruiz, T., and Richard, H. (2021). Artificial intelligence algorithm qualification: a quality by design approach to apply artificial intelligence in pharma. PDA J. Pharm. Sci. Technol. 75 (1), 100–118. PMID:32817323. doi:10.5731/pdajpst.2019.011338

Martin, G. L., Jouganous, J., Savidan, R., Bellec, A., Goehrs, C., Benkebil, M., et al. (2022). Validation of artificial intelligence to support the automatic coding of patient adverse drug reaction reports, using nationwide pharmacovigilance data. Drug Saf. N. Z. 45 (5), 535–548. PMID:35579816. doi:10.1007/s40264-022-01153-8

Mayr, A., Klambauer, G., Unterthiner, T., and Hochreiter, S. (2016). DeepTox: toxicity prediction using deep learning. Front. Environ. Sci. 3. doi:10.3389/fenvs.2015.00080

Mertes, S., Huber, T., Weitz, K., Heimerl, A., and André, E. (2022). GANterfactual—counterfactual Explanations for medical non-experts using generative adversarial learning. Front. Artif. Intell. 5, 825565. doi:10.3389/frai.2022.825565

MHRA (2024a). The emergence of artificial intelligence and machine learning algorithms in healthcare: recommendations to support governance and regulation - position paper. Available at: https://www.bsigroup.com/globalassets/localfiles/en-gb/about-bsi/nsb/innovation/mhra-ai-paper-2019.pdf.

MHRA (2024b). Impact of AI on the regulation of medical products Implementing the AI White Paper principles. Available at: https://assets.publishing.service.gov.uk/media/662fce1e9e82181baa98a988/MHRA_Impact-of-AI-on-the-regulation-of-medical-products.pdf.

Nair, S. K., Eeles, C., Ho, C., Beri, G., Yoo, E., Tkachuk, D., et al. (2020). ToxicoDB: an integrated database to mine and visualize large-scale toxicogenomic datasets. Nucleic Acids Res. 48 (W1), W455-W462–W462. doi:10.1093/nar/gkaa390

Norinder, U., and Bergström, C. A. S. (2006). Prediction of ADMET properties. ChemMedChem 1 (9), 920–937. PMID:16952133. doi:10.1002/cmdc.200600155

Pall, R., Gauthier, Y., Auer, S., and Mowaswes, W. (2023). Predicting drug shortages using pharmacy data and machine learning. Health Care Manag. Sci. 26 (3), 395–411. doi:10.1007/s10729-022-09627-y

Paul, D., Sanap, G., Shenoy, S., Kalyane, D., Kalia, K., and Tekade, R. K. (2021). Artificial intelligence in drug discovery and development. Drug Discov. Today 26 (1), 80–93. PMID:33099022. doi:10.1016/j.drudis.2020.10.010

Price, W. N. (2017). Regulating black-box medicine. Mich Law Rev. 116 (3), 421–474. PMID:29240330. doi:10.36644/mlr.116.3.regulating

Pun, F. W., Pulous, F., and Zhavoronkov, A. (2024). Generative artificial intelligence for drug discovery: how the first AI-discovered and AI-designed drug progressed to phase 2 clinical testing. Res. Communities Springer Nat. Available at: https://communities.springernature.com/posts/draft-9b0556ab-aa4d-49fd-bf3c-9c1a621defff [Accessed July 9, 2024].

Rantanen, J., and Khinast, J. (2015). The future of pharmaceutical manufacturing sciences. J. Pharm. Sci. 104 (11), 3612–3638. doi:10.1002/jps.24594

Regulatorische (2024). Regulatorische Nutzung KI-gestützter Methoden zur effizienten Bewertung und Regulation biomedizinischen Arzneimitteln (KIMERBA). Available at: https://www.bundesgesundheitsministerium.de/ministerium/ressortforschung/handlungsfelder/digitalisierung/kimerba (Accessed March 2, 2024).

Ren, F., Aliper, A., Chen, J., Zhao, H., Rao, S., Kuppe, C., et al. (2024). A small-molecule TNIK inhibitor targets fibrosis in preclinical and clinical models. Nat. Biotechnol. 8. doi:10.1038/s41587-024-02143-0

Routray, R., Tetarenko, N., Abu-Assal, C., Mockute, R., Assuncao, B., Chen, H., et al. (2020). Application of augmented intelligence for pharmacovigilance case seriousness determination. Drug Saf. N. Z. 43 (1), 57–66. PMID:31605285. doi:10.1007/s40264-019-00869-4

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1 (5), 206–215. doi:10.1038/s42256-019-0048-x

Samala, R. K., Chan, H., Hadjiiski, L., Helvie, M. A., Wei, J., and Cha, K. (2016). Mass detection in digital breast tomosynthesis: deep convolutional neural network with transfer learning from mammography. Med. Phys. 43 (12), 6654–6666. doi:10.1118/1.4967345

Schmider, J., Kumar, K., LaForest, C., Swankoski, B., Naim, K., and Caubel, P. M. (2019). Innovation in pharmacovigilance: use of artificial intelligence in adverse event case processing. Clin. Pharmacol. Ther. U. S. 105 (4), 954–961. PMID:30303528. doi:10.1002/cpt.1255

Schneider, P., Walters, W. P., Plowright, A. T., Sieroka, N., Listgarten, J., Goodnow, R. A., et al. (2020). Rethinking drug design in the artificial intelligence era. Nat. Rev. Drug Discov. 19 (5), 353–364. PMID:31801986. doi:10.1038/s41573-019-0050-3

Sharma, A., Virmani, T., Pathak, V., Sharma, A., Pathak, K., Kumar, G., et al. (2022). Artificial intelligence-based data-driven strategy to accelerate research, development, and clinical trials of COVID vaccine. Biomed. Res. Int. U. S. 2022, 7205241. PMID:35845955. doi:10.1155/2022/7205241

Shelmerdine, S. C., Togher, D., Rickaby, S., and Dean, G. (2024). Artificial intelligence (AI) implementation within the national health service (NHS): the south west London AI working group experience. Clin. Radiol. doi:10.1016/j.crad.2024.05.018

Takayama, K., Fujikawa, M., and Nagai, T. (1999). Artificial neural network as a novel method to optimize pharmaceutical formulations. Pharm. Res. 16 (1), 1–6. doi:10.1023/A:1011986823850

Transparency (2024). Transparency. European Medicines Agency. Available at: https://www.ema.europa.eu/en/about-us/how-we-work/transparency (Accessed May 22, 2024).

Vamathevan, J., Clark, D., Czodrowski, P., Dunham, I., Ferran, E., Lee, G., et al. (2019). Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 18 (6), 463–477. PMID:30976107. doi:10.1038/s41573-019-0024-5

Vellido, A. (2019). Societal issues concerning the application of artificial intelligence in medicine. Kidney Dis. 5 (1), 11–17. doi:10.1159/000492428

Vora, L. K., Gholap, A. D., Jetha, K., Thakur, R. R. S., Solanki, H. K., and Chavda, V. P. (2023). Artificial intelligence in pharmaceutical Technology and drug delivery design. Pharmaceutics 15 (7), 1916. doi:10.3390/pharmaceutics15071916

Wang, L., Ding, J., Pan, L., Cao, D., Jiang, H., and Ding, X. (2019). Artificial intelligence facilitates drug design in the big data era. Chemom. Intell. Lab. Syst. 194, 103850. doi:10.1016/j.chemolab.2019.103850

Wang, Y., Zhang, T., Guo, X., and Shen, Z. (2024). Gradient based feature attribution in explainable AI: a technical review. arXiv. doi:10.48550/ARXIV.2403.10415

Wu, L., Huang, R., Tetko, I. V., Xia, Z., Xu, J., and Tong, W. (2021). Trade-off predictivity and explainability for machine-learning powered predictive toxicology: an in-depth investigation with Tox21 data sets. Chem. Res. Toxicol. 34 (2), 541–549. PMID:33513003. doi:10.1021/acs.chemrestox.0c00373

Wu, L., Yan, B., Han, J., Li, R., Xiao, J., He, S., et al. (2023). TOXRIC: a comprehensive database of toxicological data and benchmarks. Nucleic Acids Res. 51 (D1), D1432–D1445. doi:10.1093/nar/gkac1074

Xu, J., Qi, H., Wang, Z., Wang, L., Steurer, B., Cai, X., et al. (2024). Discovery of a novel and potent cyclin-dependent kinase 8/19 (CDK8/19) inhibitor for the treatment of cancer. J. Med. Chem. 67 (10), 8161–8171. doi:10.1021/acs.jmedchem.4c00248

Yu, K.-H., Beam, A. L., and Kohane, I. S. (2018). Artificial intelligence in healthcare. Nat. Biomed. Eng. 2 (10), 719–731. PMID:31015651. doi:10.1038/s41551-018-0305-z

Zong, N., Wen, A., Moon, S., Fu, S., Wang, L., Zhao, Y., et al. (2022). Computational drug repurposing based on electronic health records: a scoping review. Npj Digit. Med. 5 (1), 77. doi:10.1038/s41746-022-00617-6

Keywords: artificial intelligence, health policy, regulatory science, drug lifecycle, drug approval process, patient safety

Citation: Oualikene-Gonin W, Jaulent M-C, Thierry J-P, Oliveira-Martins S, Belgodère L, Maison P, Ankri J and The Scientific Advisory Board of ANSM (2024) Artificial intelligence integration in the drug lifecycle and in regulatory science: policy implications, challenges and opportunities. Front. Pharmacol. 15:1437167. doi: 10.3389/fphar.2024.1437167

Received: 23 May 2024; Accepted: 18 July 2024;
Published: 02 August 2024.

Edited by:

Christian Agyare, Kwame Nkrumah University of Science and Technology, Ghana

Reviewed by:

Srijit Seal, Broad Institute, United States
Obed Brew, University of West London, United Kingdom

Copyright © 2024 Oualikene-Gonin, Jaulent, Thierry, Oliveira-Martins, Belgodère, Maison, Ankri and The Scientific Advisory Board of ANSM. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Wahiba Oualikene-Gonin, wahiba.oualikene-gonin@ansm.sante.fr
