Implications of Causality in Artificial Intelligence
GENERAL COMMENTARY article
Why Causal AI Is Easier Said Than Done
Provisionally accepted - Simon Fraser University, Burnaby, Canada
Luís Cavique's (2024) article, "Implications of Causality in Artificial Intelligence," presents a compelling case for the importance of causal AI. By focusing on cause-and-effect relationships rather than mere correlations, causal AI offers a pathway to more transparent, fair, and reliable AI systems. Cavique argues that causal AI is the least criticized approach compared to responsible AI, fair AI, and explainable AI, largely owing to its scientific rigor and potential to reduce biases. Yet, despite its promise, causal AI is not without challenges. This commentary assesses some of the limitations of, and potential criticisms of, causal AI as presented by Cavique, arguing that while the approach holds substantial promise, its implementation and practical application may be more complex and fraught with difficulties than the author suggests.

One of the primary challenges with causal AI lies in its complexity. Causal AI requires a deep understanding of causal inference and advanced statistical techniques, making it less accessible to most AI developers (Cox Jr., 2023). Unlike correlation-based methods, which are widely understood and now relatively easy to implement, causal models demand a high level of expertise: arguably, only a select group of experts can effectively design, implement, and interpret them. This complexity creates barriers to entry for the many organizations and individuals who might want to develop or use causal AI in order to benefit from the transparency and fairness it promises. It could also exacerbate existing disparities in AI literacy, capacity, and epistemic justice, potentially leading to a heightened form of AI elitism in which only those with advanced skills, knowledge, and resources can fully participate in or critique causal AI development. Such a situation would undermine the broader goal of making the benefits of causal AI accessible to a wide audience.
Causal AI's reliance on high-quality, detailed data presents another significant challenge. Establishing causal relationships requires data that not only captures correlations but also provides the context needed to infer causality (Vallverdú, 2024). In many real-world applications, such data is either unavailable or prohibitively expensive to obtain. Even when data is available, it may be incomplete or biased in ways that skew causal inferences. The assumptions underlying causal models also warrant critical examination. Causal models often assume that all relevant variables have been identified and correctly measured. In practice, however, unmeasured confounders (variables that influence both the cause and the effect) can distort causal estimates and lead to incorrect conclusions, and, as Rawal et al. (2024) put it, there is a lack of ground truth for validation. This reliance on potentially faulty assumptions could result in AI systems that, while appearing transparent and fair, are actually based on flawed reasoning. Furthermore, the process of identifying and validating causal relationships is resource-intensive and time-consuming. This raises questions about the scalability of causal AI, particularly in dynamic environments where data is constantly evolving and causal relationships may shift over time. The effort required to maintain accurate causal models could outweigh the benefits, especially in fast-paced industries where quick decision-making is critical.
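The distortion that an unmeasured confounder introduces can be made concrete with a small simulation. The sketch below is a minimal illustration in Python using only NumPy; the variable names and effect sizes are invented for the example. A hidden confounder U drives both the "treatment" X and the outcome Y, with no direct effect of X on Y, yet a naive regression of Y on X reports a strong association; only adjusting for U (which, in a real unmeasured-confounding scenario, would be impossible) recovers the true null effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder U drives both "treatment" X and outcome Y.
u = rng.normal(size=n)
x = 2.0 * u + rng.normal(size=n)  # X caused by U (plus noise)
y = 3.0 * u + rng.normal(size=n)  # Y caused by U only; true effect of X on Y is 0

# Naive OLS slope of Y on X: biased by the unmeasured confounder.
naive_slope = np.cov(x, y)[0, 1] / np.var(x)

# If U were measured, adjusting for it recovers the true (null) effect:
# regress Y on [X, U] (all variables are zero-mean, so no intercept needed)
# and read off the X coefficient.
design = np.column_stack([x, u])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
adjusted_slope = coefs[0]

print(f"naive slope:    {naive_slope:.2f}")    # biased, close to 1.2
print(f"adjusted slope: {adjusted_slope:.2f}")  # close to the true value, 0
```

The point of the sketch is that the two estimates disagree sharply even on clean, abundant data; without access to U, nothing in the observed (X, Y) pairs signals that the naive estimate is wrong, which is precisely the lack of ground truth noted above.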
Scalability is a crucial consideration in the deployment of AI systems, and causal AI may struggle in this area. The process of building and validating causal models is not only complex but also resource-intensive. As Cavique rightly notes, causal AI requires meticulous identification of causal variables and relationships, which may not generalize easily across different contexts or applications, particularly in sectors requiring a major data-curation effort (such as healthcare). This limitation could hinder the practical application of causal AI in scenarios where scalability and adaptability are key. The specificity required by causal models may limit their ability to generalize across different datasets or environments: while correlation-based models can often be applied broadly with minimal adjustment, causal models may need to be tailored to the particularities of each new situation. This lack of generalizability could make causal AI less appealing in settings where adaptability is needed. Causal AI is lauded for its potential to improve fairness and transparency in AI systems, but these benefits are not guaranteed. The causal relationships identified by AI systems are not immune to the biases present in the underlying data: if the data reflects existing societal biases or power dynamics, the causal models derived from it may inadvertently reinforce those issues. Put more simply, a causal model trained on biased data might correctly identify a causal relationship but still perpetuate unjust outcomes. Moreover, the interpretation of causal models can be influenced by the subjective perspectives of those designing or using them (Mittelstadt et al., 2019), especially if the design of causal AI is not inclusive and transparent and does not allow for the active participation of stakeholders.
This subjectivity introduces another layer of potential bias, as different stakeholders may have different interpretations of what constitutes a fair or just causal relationship. Ensuring that causal AI models are both fair and transparent requires careful consideration of these ethical and interpretive challenges, which are not easily addressed through technical solutions alone (Bélisle-Pipon et al., 2021). The practical implementation of causal AI also raises significant concerns. Integrating causal models into existing AI systems may require substantial changes to workflows, data pipelines, and decision-making processes, and these changes come with associated costs in both time and resources. Organizations may be reluctant to adopt causal AI if the benefits are not immediately clear or if the costs outweigh the perceived advantages. Furthermore, the transition to causal AI could disrupt existing AI practices and provoke resistance from stakeholders who are comfortable with current methods. The need for specialized knowledge and expertise to implement and maintain causal models may further exacerbate these challenges, making the adoption of causal AI more difficult in practice than in theory. Finally, the focus on causality may introduce increased risks of unintended (and undetected) consequences. Causal AI based on flawed models or data can lead to unintended negative outcomes: adjustments to causal variables might have unforeseen side effects, particularly in less-documented contexts or among marginalized populations, leading to data-driven biases (Norori et al., 2019).
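The risk of side effects from intervening on a mis-specified model can also be sketched in simulation. The toy structural model below is hypothetical (the equations and coefficients are invented for the example): an intervention that fixes X to raise the intended outcome Y simultaneously shifts a second outcome Z through a pathway the model builder did not account for.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def simulate(x_intervention=None):
    """Simulate a toy structural model; optionally force X to a fixed
    value, mimicking a do()-style intervention."""
    if x_intervention is None:
        x = rng.normal(size=n)                  # X varies naturally
    else:
        x = np.full(n, float(x_intervention))   # X set by intervention
    y = 1.5 * x + rng.normal(size=n)   # intended target: X raises Y
    z = -2.0 * x + rng.normal(size=n)  # overlooked pathway: X lowers Z
    return y.mean(), z.mean()

y_base, z_base = simulate()
y_do, z_do = simulate(x_intervention=2.0)  # intervene: set X = 2 everywhere

print(f"Y: {y_base:.2f} -> {y_do:.2f}")  # improves as intended (~0 -> ~3)
print(f"Z: {z_base:.2f} -> {z_do:.2f}")  # collateral harm (~0 -> ~-4)
```

Because the planner's model omitted the X-to-Z pathway, the harm to Z is invisible at design time and only appears once the intervention is deployed, which is the sense in which such consequences can be both unintended and undetected.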
While causal AI aims to identify and leverage causal relationships to improve decision-making, if the underlying model is wrong, interventions based on it could have unforeseen effects. For instance, intervening on a causal variable to achieve a desired outcome might inadvertently produce negative side effects in other areas. These unintended consequences highlight the need for a cautious and nuanced approach to applying causal AI in practice. Beyond that, causal AI does not address the underlying issues of fairness, representation, and power imbalance: that a relationship is causal from a data and AI point of view does not mean it is true and representative of reality. Even a causal AI capable of grasping cause-and-effect relationships will be wrong about non- or under-documented realities. An important example concerns rarer phenomena and, especially, marginalized populations, which will not be better represented by causal AI, nor better understood or more fully taken into consideration. Cavique's advocacy for causal AI is well-founded and highlights the approach's potential to address critical issues in AI, but significant challenges accompany this paradigm. The complexity, data requirements, scalability issues, and ethical considerations all pose substantial obstacles to the widespread adoption of causal AI. Moreover, the practical implementation of these models may involve significant costs and risks, which could limit their appeal. Causal AI represents an exciting and promising direction, but caution needs to be exercised, and its potential benefits must be carefully weighed against the challenges it presents. In this context, further research into causal AI becomes not only desirable but essential. By shifting the focus from mere correlations to understanding why certain relationships exist, causal AI offers a promising path toward more robust, adaptable, and transparent AI systems.
As Cavique points out, Pearl and Mackenzie's (2018) approach, which emphasizes the need for a framework that captures the underlying mechanisms of data, could be key to advancing AI beyond its current capabilities. While the complexity of causal inference presents challenges, such as the need for specialized expertise and high-quality data, the long-term potential of these methods suggests that they could redefine how we understand and achieve intelligence in AI systems. Moving forward, a focus on addressing the complexities, data requirements, and ethical considerations outlined here will be crucial for realizing the full potential of causal AI in moving the field beyond the limitations of correlation-based models.
Keywords: Causal AI, AI ethics, Unintended consequence, Scalability, Ethical issue, Social implication
Received: 29 Aug 2024; Accepted: 16 Dec 2024.
Copyright: © 2024 Bélisle-Pipon. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Jean-Christophe Bélisle-Pipon, Simon Fraser University, Burnaby, Canada
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.