- 1Centre of Excellence for Biosecurity Risk Analysis, University of Melbourne, Parkville, VIC, Australia
- 2Delft Institute of Applied Mathematics, Delft University of Technology, Delft, Netherlands
Editorial on the Research Topic
Multivariate Probabilistic Modelling for Risk and Decision Analysis
We argue that any process (e.g., risk analysis) that informs decision making under uncertainty should be objective and scientific. This implies the need for a formalized methodology that is accountable, transparent, and repeatable. The beginnings of decision making under uncertainty can be traced back to 1931, when a piece written in 1926 by the philosopher and mathematician F. Ramsey was published. In this piece, Ramsey [1] gives a rational reconstruction of vague concepts such as partial belief and value and shows how they can be operationalized and measured. Most importantly, it argues that these notions must be operationalized together, by observing a person's choice behavior under uncertainty. Partial belief is shown to obey Kolmogorov's axioms for (finitely additive) probability. The value, or utility, part of decision theory was rediscovered by Von Neumann and Morgenstern [2], and Savage [3] generalized the theory and supplied an axiomatic foundation. The choice behavior of an individual satisfying these axioms can be uniquely represented as expected utility, whereby the subjective probability (representing the partial belief) is unique and the utility is unique up to a positive affine transformation.
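In symbols (a standard statement of the representation theorem, in our notation rather than Ramsey's or Savage's original formalism): an act f is weakly preferred to an act g exactly when its subjective expected utility is at least as large,

```latex
f \succsim g
\iff
\sum_{s \in S} p(s)\, u\bigl(f(s)\bigr) \;\ge\; \sum_{s \in S} p(s)\, u\bigl(g(s)\bigr),
```

where the subjective probability p over the states S is unique and the utility u is unique up to a positive affine transformation u' = au + b with a > 0.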
These foundations (based on probability theory and expected utility) remain at the core of rational decision making because they are simple and adequate for applications. However, the project of rationalizing decisions under uncertainty encompasses much more, and it is discussed in various references, among which [4–9]. Most of these references come with toy and real-life examples and their solutions using various decision analysis techniques and software.
The close relationship and interlaced development of risk analysis and decision theory are discussed in Bier [10]. The references therein cover recent applications as well as methodological advances. However, the only essential prerequisite for grasping probabilistic risk and decision theory is basic probabilistic and statistical knowledge.
Modeling uncertainty often requires the assessment of multiple, dependent uncertain quantities of interest. In addition to the univariate distributions, the interdependencies between these quantities, or variables, need to be modeled to properly understand the overall risk or identify an optimal decision.
For an overview of probabilistic dependence models we recommend [11, 12]; the latter book is accompanied by software and code in R. When data are available, statistical techniques can be used to estimate the required parameters of probabilistic models. In the absence of data, expert judgment is indispensable. The scientific collection, treatment, and use of expert judgment are thoroughly discussed in Dias et al. [13] and Hanea et al. [14]; both references also abound in applications.
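As a flavor of what such a dependence model looks like in practice, the sketch below samples from a bivariate Gaussian copula with given marginals. This is a minimal illustration in Python (rather than the R code accompanying [12]); the marginals and the correlation parameter are hypothetical placeholders, not values from any of the cited works.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical dependence parameter (correlation of the underlying
# Gaussian copula) -- in practice estimated from data or elicited.
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: sample from a standard bivariate normal with correlation rho.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# Step 2: transform to uniforms via the standard normal CDF -- these
# uniforms carry only the dependence structure (the copula).
u = stats.norm.cdf(z)

# Step 3: apply inverse CDFs of the desired (hypothetical) marginals.
x = stats.gamma(a=2.0, scale=1.5).ppf(u[:, 0])   # e.g., a cost
y = stats.lognorm(s=0.5).ppf(u[:, 1])            # e.g., a duration

# The sample (x, y) has the chosen marginals and a rank correlation
# induced by rho (Spearman's rho = (6/pi) * arcsin(rho/2) here).
print(stats.spearmanr(x, y)[0])
```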
Various probabilistic dependence models can be used to represent multivariate distributions; the most advantageous models are flexible, yet statistically robust. Striking a balance between model complexity and ease of development requires continual compromise.
The process of building and quantifying such models with experts, stakeholders, and analysts follows a series of clear steps: building a conceptual model, identifying parameters, formulating data requirements, and addressing data gaps. Although a clear separation of the qualitative and quantitative steps may seem tempting, the modeling process needs to be treated as a whole [15].
The current Research Topic touches on a few of the above subjects across five articles (discussed below). One sets the scene for building and quantifying much-needed probabilistic models and offers an overview of the existing literature; two treat topics around expert-elicited parameters; and the remaining two present the challenges of building dependence models. All articles illustrate their methodology on examples and applications.
In the methods article of the current Research Topic, Burgman et al. give an overview of the available methods for model building and quantification. The authors advise a participatory (and iterative) approach to model development. However, they argue that best-practice advice cannot be formulated on the basis of the (fairly fractured) existing literature.
Often, after a consensus model is built but prior to full quantification, data gaps are identified. When these gaps cannot be filled with experimental data, expert data are elicited, aggregated, validated, and used instead. Using structured protocols for such elicitations is imperative. Building on previous research and the community's understanding of what a structured expert elicitation protocol entails, a few elements have been identified as essential for quantitative elicitations [16]. These are: (1) asking questions that have clear operational meanings; (2) following transparent methodological rules, such that the process is traceable and repeatable; (3) mitigating psychological and motivational biases; (4) thoroughly documenting the process; and (5) providing opportunities for empirical evaluation and validation.
The last element justifies the use of calibration (a.k.a. seed) variables, which provide an empirical basis for validating experts' judgments. By using calibration questions, experts' performance can be evaluated in terms of various performance measures, such as calibration, accuracy, and informativeness. Using the same measures, any combination of experts' assessments one chooses to use as a final aggregated answer can be evaluated likewise.
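To make the calibration idea concrete, the sketch below computes the calibration score of Cooke's classical model, one common choice of performance measure (the expert quantiles and seed realizations are made up purely for illustration):

```python
import numpy as np
from scipy.stats import chi2

# Classical-model calibration: experts give 5%, 50%, 95% quantiles for
# each seed (calibration) variable, so realizations should fall in the
# four inter-quantile bins with probabilities p = (0.05, 0.45, 0.45, 0.05).
p = np.array([0.05, 0.45, 0.45, 0.05])

# Hypothetical data: one expert's quantiles for 10 seed questions (rows)
# and the observed realizations.
quantiles = np.array([[2.0, 5.0, 9.0]] * 10)   # placeholder assessments
realizations = np.array([1.5, 4.0, 6.0, 8.0, 3.0,
                         5.5, 9.5, 4.5, 7.0, 2.5])

# Empirical bin frequencies s: which bin each realization falls into.
bins = np.array([np.searchsorted(q, r)
                 for q, r in zip(quantiles, realizations)])
s = np.bincount(bins, minlength=4) / len(realizations)

# Relative information (KL divergence) of s with respect to p.
mask = s > 0
kl = np.sum(s[mask] * np.log(s[mask] / p[mask]))

# Calibration score: p-value of the hypothesis that the expert's
# empirical frequencies are consistent with p (2*N*KL ~ chi2, 3 df).
n = len(realizations)
calibration = chi2.sf(2 * n * kl, df=3)
print(f"calibration score: {calibration:.3f}")
```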
However, some of these performance measures are sensitive to the type and number of calibration questions answered. Experts often answer few questions, or different subsets of questions, and the stability and reliability of existing performance measures are consequently questioned. Dharmarathne et al. propose a model that (theoretically) improves the calculation of the Brier score, an accuracy measure often used to evaluate experts' assessments of event probabilities. The proposed method is still in its infancy and has so far been evaluated only on synthetic data and one real dataset, in which experts assessed the probabilities of geo-political events.
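For reference, the standard Brier score that the article sets out to improve is straightforward to compute; the probabilities and outcomes below are hypothetical, and the authors' adjusted estimator itself is not reproduced here.

```python
import numpy as np

# Hypothetical elicited probabilities for binary events and outcomes.
probs = np.array([0.9, 0.2, 0.7, 0.5, 0.1])   # expert's P(event occurs)
outcomes = np.array([1, 0, 1, 0, 0])          # 1 if the event occurred

# Brier score: mean squared difference between forecast and outcome.
# 0 is perfect; 0.25 matches a constant 0.5 "know-nothing" forecast.
brier = np.mean((probs - outcomes) ** 2)
print(f"Brier score: {brier:.3f}")
```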
Eliciting bivariate continuous distributions from experts often involves eliciting marginal distributions and their dependence via conditional probabilities. This is also the method employed by Werner et al. in the context of modeling antibiotic resistance.
Building a multivariate distribution from marginal distributions and dependence information relies on copulas [17], whose properties are discussed in Werner and Cai et al.
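As a minimal illustration of this route (assuming, purely for concreteness, a Gaussian copula; the function name below is ours), an elicited conditional exceedance probability P(X > median of X | Y > median of Y) can be inverted analytically to the copula's correlation parameter:

```python
import numpy as np

def rho_from_conditional(p_cond: float) -> float:
    """Invert an elicited P(X > median_X | Y > median_Y) to the
    correlation parameter of a Gaussian copula, using the closed form
    P = 1/2 + arcsin(rho)/pi (a bivariate-normal orthant result)."""
    return np.sin(np.pi * (p_cond - 0.5))

# Hypothetical elicited value: the expert believes X exceeds its median
# 70% of the time when Y exceeds its median.
p_cond = 0.70
rho = rho_from_conditional(p_cond)
print(f"implied Gaussian-copula correlation: {rho:.3f}")   # ~0.588

# Sklar's theorem then glues this copula to any elicited marginals:
# F(x, y) = C_rho(F_X(x), F_Y(y)).
```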
Werner proposes a novel framework to model and assess the risk entailed by tail dependence between service times in Discrete Event Simulation through minimally informative copulas. A linear-programming-based method for assessing minimally informative copulas is proposed for the case where dependence is described through conditional probabilities elicited from experts.
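In broad strokes (our paraphrase of the standard minimal-information formulation, not the article's exact discretization), the minimally informative copula density c solves

```latex
\min_{c \ge 0} \; D(c \,\|\, \pi)
= \int_{[0,1]^2} c(u,v)\,\log c(u,v)\,\mathrm{d}u\,\mathrm{d}v
```

subject to c having uniform marginals and reproducing the expert-elicited conditional probabilities; here π ≡ 1 is the independent copula density, so the relative information reduces to the integral above. Discretizing c on a grid turns the marginal and elicited constraints into linear ones, which is what a linear-programming-style treatment exploits.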
Finally, Cai et al. propose a copula-based approach to estimating the dependence among lumber strength properties. A graphical method based on simulated data from the fitted copula model is used to understand the sources of damage to the lumber specimens.
This Research Topic focuses on multivariate dependence modeling in the context of risk and decision analysis, covering both data-driven and expert-based research directions. The topic's articles focus on modeling, validation, and the important connection between available methods and the practical issues with which practitioners are confronted. The applications show the potential of dependence modeling in diverse settings. Finally, the articles also point to a variety of research directions, and to the long exploratory road ahead, especially for expert-based dependence modeling.
Author Contributions
Both authors drafted, reviewed, and approved the text for publication.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments
We would like to thank all authors for their contributions and all reviewers for their time and insightful comments.
References
1. Ramsey FP. Truth and probability (1926). In: The Foundations of Mathematics and Other Logical Essays. London: Kegan Paul (1931). Ch. VII, p. 156–98.
2. Von Neumann J, Morgenstern O. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press (1944).
3. Savage LJ. The Foundations of Statistics. New York, NY: Wiley (1954).
4. French S, Hartley R, Thomas LC, White DJ, editors. Multi-Objective Decision Making. London: Academic Press (1983).
7. Bedford T, Cooke R. Probabilistic Risk Analysis: Foundations and Methods. Cambridge: Cambridge University Press (2001).
8. Clemen RT, Reilly T. Making Hard Decisions With Decision Tools. Pacific Grove, CA: Duxbury (2001).
9. Edwards W, Miles RF, von Winterfeldt D, editors. Advances in Decision Analysis: From Foundations to Applications. Cambridge, UK: Cambridge University Press (2007).
10. Bier V. The role of decision analysis in risk analysis: a retrospective. Risk Anal. (2020) 40:2207–17. doi: 10.1111/risa.13583
13. Dias LC, Morton A, Quigley J, editors. Elicitation: The Science and Art of Structuring Judgement. International Series in Operations Research & Management Science. Cham: Springer (2018).
14. Hanea AM, Nane GF, Bedford T, French S, editors. Expert Judgment in Risk and Decision Analysis. International Series in Operations Research & Management Science. Cham: Springer (2021).
15. French S. From soft to hard elicitation. J Oper Res Soc. (2021). doi: 10.1080/01605682.2021.1907244. [Epub ahead of print].
16. Hanea A, McBride M, Burgman M, Wintle B. The value of discussion and performance weights in aggregated expert judgements. Risk Analysis. (2018) 38:1781–94. doi: 10.1111/risa.12992
Keywords: multivariate probabilistic modeling, structured expert judgment, decision analysis, risk analysis, dependence modeling
Citation: Hanea AM and Nane GF (2022) Editorial: Multivariate Probabilistic Modelling for Risk and Decision Analysis. Front. Appl. Math. Stat. 8:829729. doi: 10.3389/fams.2022.829729
Received: 06 December 2021; Accepted: 27 January 2022;
Published: 22 February 2022.
Edited and reviewed by:
Daniel Potts, Chemnitz University of Technology, Germany

Copyright © 2022 Hanea and Nane. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Anca Maria Hanea, anca.maria.hanea@gmail.com