AUTHOR=Hegerl Gabriele C., Ballinger Andrew P., Booth Ben B. B., Borchert Leonard F., Brunner Lukas, Donat Markus G., Doblas-Reyes Francisco J., Harris Glen R., Lowe Jason, Mahmood Rashed, Mignot Juliette, Murphy James M., Swingedouw Didier, Weisheimer Antje TITLE=Toward Consistent Observational Constraints in Climate Predictions and Projections JOURNAL=Frontiers in Climate VOLUME=3 YEAR=2021 URL=https://www.frontiersin.org/journals/climate/articles/10.3389/fclim.2021.678109 DOI=10.3389/fclim.2021.678109 ISSN=2624-9553 ABSTRACT=

Observations facilitate model evaluation and provide constraints that are relevant to future predictions and projections. Constraints for uninitialized projections are generally based on model performance in simulating climatology and climate change. For initialized predictions, skill scores over the hindcast period provide insight into the relative performance of models, and into the value of initialization as compared to projections. Predictions and projections combined can, in principle, provide seamless decadal to multi-decadal climate information. For that, though, the role of observations in skill estimates and constraints needs to be understood in order to use both consistently across the prediction and projection time horizons. This paper discusses the challenges in doing so, illustrated by examples of state-of-the-art methods for predicting and projecting changes in European climate. It discusses constraints across prediction and projection methods, their interpretation, and the metrics that drive them, such as process accuracy, accurate trends, or a high signal-to-noise ratio. We also discuss the potential to combine constraints to arrive at more reliable climate prediction systems from years to decades. To illustrate constraints on projections, we discuss their use in the UK's climate prediction system UKCP18, the case of model performance weights obtained from the Climate model Weighting by Independence and Performance (ClimWIP) method, and the estimated magnitude of the forced signal in observations from detection and attribution. For initialized predictions, skill scores are used to evaluate which models perform well, what might contribute to this performance, and how skill may vary over time. Skill estimates also vary with different phases of climate variability and climatic conditions, and are influenced by the presence of external forcing. This complicates the systematic use of observational constraints.
Furthermore, we illustrate that sub-selecting simulations from large ensembles based on their reproduction of the observed evolution of climate variations provides a good testbed for combining projections and predictions. Finally, the methods described in this paper potentially add value to projections and predictions for users, but must be used with caution.