
EDITORIAL article
Front. Drug Saf. Regul.
Sec. Advanced Methods in Pharmacovigilance and Pharmacoepidemiology
Volume 5 - 2025 | doi: 10.3389/fdsfr.2025.1579171
This article is part of the Research Topic External Control Arms for Single-Arm Studies: Methodological Considerations and Applications
This Research Topic includes five articles that highlight recent methodological developments aimed at enhancing the transparency and robustness of external comparator (EC) studies.
As the use of ECs expands, clear and consistent terminology is essential to ensure methodological transparency and regulatory acceptance. Rippin et al. (2024) propose a standardized nomenclature framework for studies comparing clinical trial intervention arms with external data. They emphasize that terminology should reflect the observational nature of EC studies rather than suggest equivalence to RCT control arms. They caution against the term Externally Controlled Trial unless the EC was pre-specified in the protocol. Instead, they advocate for terms such as External Comparator Cohort (ECC) or External Comparator (EC) when data collection for the comparator was not planned before trial initiation. External patient data used as a comparator differ fundamentally from RCT control arms: they are drawn from separate populations and often rely on different data collection methods. Moreover, calling an external comparator population an 'arm' falsely suggests that the external data are directly connected to the interventional trial data. The authors argue that the term 'study' is appropriate, since an ECC is a new study conducted outside the original trial protocol that applies observational research methods. By adopting precise terminology, researchers and regulators can minimize unrealistic expectations and better align EC studies with observational research principles.
Building upon the need for clearer definitions, Rippin (2024) explores the role of estimands in defining treatment effects in EC studies. This perspective article discusses how the estimand framework outlined in the International Council for Harmonisation (ICH) E9(R1) addendum can be adapted for use in EC studies.
This framework describes five estimand attributes (treatment conditions, population, endpoints, handling of intercurrent events, and population-level summary). The author suggests additional estimand considerations specific to EC studies, including the baseline definition, the marginal estimator, and completeness of data. These attributes are particularly important in EC studies, where data inconsistencies and potential biases require refined methodologies.
A major challenge in EC studies is ensuring the methodological rigor needed to generate regulatory-grade evidence. Arnold et al. (2024) propose target trial emulation (TTE) as a framework to improve the design and analysis of EC studies. The TTE framework involves specifying the protocol of an ideal target trial and then emulating its key elements, such as eligibility criteria, treatment strategies, and outcome definitions, using real-world data (RWD). This approach enhances comparability between EC and trial cohorts by mimicking the RCT design. The framework has limitations, however: its effectiveness relies heavily on the quality and completeness of the RWD, so incomplete or inaccurate data can lead to biased results; unmeasured confounding makes it difficult to draw definitive causal inferences; and implementation can be complex and resource-intensive, requiring detailed knowledge of both the target trial design and the available RWD. Despite these limitations, the transparency and structured principles of TTE can strengthen the confidence of regulatory bodies such as the FDA, the European Medicines Agency (EMA), and the Medicines and Healthcare products Regulatory Agency (MHRA) in EC studies.
Hybrid natural history studies (NHS), which combine retrospective and prospective data collection, are emerging as a valuable tool in rare disease research, particularly for developing external control arms (ECAs) in clinical trials.
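Returning to the TTE framework above, its first step (re-applying the target trial's eligibility criteria to external data at a defined time zero) can be sketched as follows. This is a minimal illustration only: the patient fields, thresholds, and cohort are hypothetical and not drawn from the Arnold et al. article.

```python
# Toy real-world cohort; field names and values are illustrative only.
rwd = [
    {"id": 1, "age": 54, "ecog": 0, "prior_lines": 1},
    {"id": 2, "age": 71, "ecog": 1, "prior_lines": 0},
    {"id": 3, "age": 38, "ecog": 2, "prior_lines": 1},
    {"id": 4, "age": 65, "ecog": 1, "prior_lines": 2},
    {"id": 5, "age": 80, "ecog": 1, "prior_lines": 1},
]

def meets_trial_criteria(patient):
    """Eligibility criteria of a hypothetical target trial, re-applied to RWD."""
    return (
        18 <= patient["age"] <= 75       # trial age window
        and patient["ecog"] <= 1         # adequate performance status
        and patient["prior_lines"] >= 1  # at least one prior line of therapy
    )

# The emulated external comparator cohort: RWD patients who would have been
# eligible for the target trial at time zero.
comparator = [p["id"] for p in rwd if meets_trial_criteria(p)]
print(comparator)
```

In a full emulation, analogous explicit mappings would be documented for the remaining design elements (treatment strategies, time zero, follow-up, and outcome definitions), so that each element of the target trial protocol has a stated RWD counterpart.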
Ugoji et al. (2024) outline the key design and analysis considerations relevant to hybrid designs used as external comparators, including that these designs must be applicable to, and feasible for, the target disease and may require design and operational considerations beyond those needed for a standalone retrospective or prospective external comparator. Given the limited information available in the literature on these designs and their potential as external comparators in regulatory submissions, these methodological recommendations offer a valuable roadmap for future use of the design, particularly in rare indications.
A critical challenge in EC studies is bias related to misclassification. Ackerman et al. (2024) explore the impact of measurement error on real-world oncology endpoints, particularly progression-free survival (PFS), focusing on how differences in assessment methods and timing in RWD limit the comparability of real-world PFS to trial PFS. Two primary sources of bias are identified: misclassification bias and surveillance bias. Misclassification bias occurs when progression events are incorrectly categorized (false positives or false negatives), distorting PFS estimates. Surveillance bias, stemming from the irregular assessment schedules of RWD compared with the strict schedules of clinical trials, has minimal impact on its own but becomes more pronounced when combined with misclassification errors. The findings emphasize that even when trial-derived and real-world PFS estimates appear comparable, underlying biases may persist at the individual level due to incomplete or inconsistent data capture in RWD.
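The interaction of these two biases can be illustrated with a toy simulation: progression becomes visible only at discrete assessment visits (surveillance), and each visit may miss a true event (misclassification as a false negative). All parameters below are assumed for illustration and are not taken from the Ackerman et al. article.

```python
import random

random.seed(0)

# Assumed toy parameters, not from the article.
N = 10_000
TRUE_MEDIAN_PFS = 6.0   # months; exponentially distributed true progression times
VISIT_INTERVAL = 3.0    # real-world assessments roughly every 3 months
FALSE_NEG_RATE = 0.2    # probability a visit misses a true progression

def simulate_observed_pfs():
    # True progression time, drawn so the median is TRUE_MEDIAN_PFS.
    t_true = random.expovariate(0.693 / TRUE_MEDIAN_PFS)
    # Progression can only be recorded at assessment visits (surveillance),
    # and each visit may fail to detect it (misclassification).
    visit = VISIT_INTERVAL
    while True:
        if visit >= t_true and random.random() > FALSE_NEG_RATE:
            return visit  # progression first recorded at this visit
        visit += VISIT_INTERVAL

observed = sorted(simulate_observed_pfs() for _ in range(N))
median_observed = observed[N // 2]
print(f"true median PFS:     {TRUE_MEDIAN_PFS:.1f} months")
print(f"observed median PFS: {median_observed:.1f} months")
```

Even this crude sketch shows the observed median drifting above the true median: every recorded event is delayed to a visit date, and missed detections push events to later visits still, which is the compounding of surveillance and misclassification effects described above.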
To mitigate these biases, the authors recommend improving the algorithms used to derive endpoints, conducting simulations to quantify biases, and contextualizing results to account for measurement error.
The collection of articles in this Research Topic underscores the evolving role of external comparator studies and the ongoing methodological advancements required to enhance their credibility in regulatory and payer decision-making. By addressing key challenges such as terminology standardization, target trial emulation, hybrid natural history study approaches, and bias mitigation, these contributions provide a strong foundation for future research. As external comparators gain wider acceptance among regulatory agencies and health technology assessment (HTA) bodies, particularly in oncology and rare diseases, ensuring methodological rigor, data quality, and transparency will be crucial to their continued credibility. Sustained collaboration between regulators, industry, and researchers will be essential to refine best practices and establish external comparators as a trusted tool for evidence generation, ultimately accelerating access to innovative therapies while maintaining high standards of scientific validity and regulatory confidence.
Keywords: external comparator, methodological innovation, target trial emulation, standardized nomenclature, misclassification bias
Received: 18 Feb 2025; Accepted: 20 Feb 2025.
Copyright: © 2025 Layton, Hester and Golozar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Deborah Layton, Lane, Clarke and Peacock (LCP) LLP, London, United Kingdom
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.