BRIEF RESEARCH REPORT

Front. Artif. Intell., 08 December 2023
Sec. AI in Food, Agriculture and Water

Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system

Carrie S. Alexander1*, Aaron Smith2 and Renata Ivanek3
  • 1Socioeconomics and Ethics, Artificial Intelligence in the Food System (AIFS), University of California, Davis, Davis, CA, United States
  • 2Agricultural and Resource Economics, University of California, Davis, Davis, CA, United States
  • 3Department of Population Medicine and Diagnostic Sciences, Cornell University, Ithaca, NY, United States

Governments, researchers, and developers emphasize creating “trustworthy AI,” defined as AI that prevents bias, ensures data privacy, and generates reliable results that perform as expected. However, in some cases problems arise not because AI is technologically untrustworthy, but because it is trustworthy. This article focuses on such problems in the food system. AI technologies facilitate the generation of masses of data that may illuminate existing food-safety and employee-safety risks. These systems may collect incidental data that could be used, or may be designed specifically, to assess and manage risks. The predictions and knowledge generated by these data and technologies may increase company liability and expense, discouraging adoption of these predictive technologies. Such problems may extend beyond the food system to other industries. Based on interviews and literature, this article discusses the resulting vulnerabilities to liability and obstacles to technology adoption, arguing that “trustworthy AI” cannot be achieved through technology alone but requires social, cultural, and political as well as technical cooperation. Implications for law and further research are also discussed.

Introduction

AI technologies have become useful and highly relevant in the food system, for example, in leveraging federated learning to combat food fraud in food supply chains (Gavai et al., 2023) and in enabling more efficient production and manufacturing processes (Misra et al., 2022; Konur et al., 2023). Many researchers, commentators, and government agencies are also interested in reining in the development and use of AI by companies that may exploit it for profit at the expense of public welfare. Much of the research and discussion focuses on the need for AI to be developed and implemented in a manner that is “trustworthy” or “responsible” (Danks, 2019; Ryan, 2020; Braun et al., 2021; Mökander and Floridi, 2021; McGovern et al., 2022). These ethical considerations, while very important, make virtually no mention of the scenario presented here, in which AI might be avoided at the public's expense. Decisions not to adopt an AI technology may at times be strategic, may be due to a lack of sufficient resources, or both. There may also be many other complex variables in play as a company attempts to stay up to date with best practices while not over-extending itself with new capabilities it may not be able to handle. This article argues that “trustworthy AI” is a package deal that requires social, cultural, and political as well as technical cooperation.

Consider the following scenario for a firm in the food industry, such as a farmer, packer, processor, or retailer. Similar scenarios could arise outside the food system. The firm is deciding whether to adopt a new AI-based technology that will produce high-frequency, granular information on the potential for disease or pathogens to adversely affect food products or workers, expressed as risk, i.e., the probability and severity of adverse effects (Lowrance, 1976; Potter, 1996). Example technologies include a computer vision system designed to detect weeds for robotic weeding or ripe fruit for harvesting, but which could also reveal fecal contamination indicating a high risk of fecal pathogen presence; automatic detection of pathogens that may enter food products in processing plants; and detection or prediction of contagious disease in food system workers or livestock. Upon receiving risk information, the firm may be able to mitigate risk or prevent an outbreak by taking potentially expensive actions such as ceasing production or recalling products. If the firm does not mitigate based on this information, then at a later date it may be held liable for damages. If it does not adopt the technology, and therefore never receives the knowledge the technology might provide, then it may be less liable because it did not have the information required to prevent or mitigate the damage. The AI technology may also improve productivity. The firm will compare the mitigation and liability costs under the existing and new technologies. If the new technology increases these costs by more than the value of any productivity gains it delivers, then an economically rational firm will not adopt it.
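To make this adoption calculus concrete, the following sketch compares a firm's expected annual costs with and without the predictive technology. It is an illustrative toy model under assumed numbers (hazard probability, mitigation cost, liability exposure, detection rate); none of the figures or names come from the interviews or the cited literature.

```python
# Illustrative sketch (not from the article): a stylized adoption decision for a
# risk-neutral firm. All parameter names and numbers are hypothetical.

def expected_annual_cost(p_hazard, liability_if_harm, mitigation_cost,
                         detection_rate, productivity_gain=0.0):
    """Expected annual cost to the firm under a given detection technology.

    p_hazard:           probability a hazardous condition arises in a year
    liability_if_harm:  expected liability if an unmitigated hazard causes harm
    mitigation_cost:    cost of acting on a detected hazard (recall, shutdown, etc.)
    detection_rate:     share of hazardous conditions the technology reveals
    productivity_gain:  any direct operational savings from the technology
    """
    detected = p_hazard * detection_rate
    undetected = p_hazard * (1 - detection_rate)
    # Detected hazards are assumed to be mitigated (the firm pays the mitigation
    # cost); undetected hazards may still cause harm and trigger liability.
    return (detected * mitigation_cost
            + undetected * liability_if_harm
            - productivity_gain)

# Status quo: nothing is detected, and with little documented knowledge of risk,
# negligence is hard to show, so expected liability per incident is modest.
status_quo = expected_annual_cost(p_hazard=0.05, liability_if_harm=200_000,
                                  mitigation_cost=0, detection_rate=0.0)

# With the AI tool: most hazards are revealed and must be mitigated at real cost,
# and any hazard left unaddressed is now well documented, raising liability exposure.
with_ai = expected_annual_cost(p_hazard=0.05, liability_if_harm=1_000_000,
                               mitigation_cost=400_000, detection_rate=0.9,
                               productivity_gain=5_000)

print(f"Expected annual cost without the tool: ${status_quo:,.0f}")  # 10,000
print(f"Expected annual cost with the tool:    ${with_ai:,.0f}")     # 18,000
# Because 18,000 > 10,000, the economically rational firm declines to adopt,
# even though adoption would reduce expected harm to workers and consumers.
```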

The concern in this article is that legal experts may reasonably and pragmatically advise their business clients that taking on a powerful new algorithm, one that suddenly provides better data on when or where certain risks may occur, could expose them to greater liability risk. This issue is similar to that noted by Wagner (1997) regarding knowledge and liability for toxicity. However, there are more potential costs than just those related to testing to acquire knowledge of risks and litigation regarding harms. An intermediate step, as this article argues, is that increased knowledge, and the increased liability such knowledge may cause, may also require companies to alter their products and processes to alleviate the risks. Such changes may be small in scope or may involve significant and costly modifications. This article reviews findings from interviews, compares and contextualizes these findings within the law and economics literatures, and suggests possibilities for helping companies manage risks so that companies, workers, and the public can benefit from these new knowledge tools.

Materials and methods

Confidential semi-structured interviews were conducted beginning in 2021 as part of on-going research funded by the AI Institute for Next Generation Food Systems (AIFS), an NSF/NIFA funded institute for AI food system technology research (Alexander et al., 2023). Interviews and one focus group have been conducted by C.A. To date, interviews have involved 66 researchers and stakeholders from all areas of the food system as part of two on-going projects, along with several surveys. The starting point and methodological foundation for this work came from bioethics research on transforming organizational culture from a culture of compliance to a culture of trustworthiness in the development and use of technologies that require, and are accountable to, the public trust (Yarborough et al., 2009). This research explores the perspectives of academic researchers in AI-related work, as well as the views of stakeholders, through a highly interdisciplinary mixed-methods approach including surveys, interviews, focus groups, quantitative analysis, and narrative analysis and inquiry (Walsham, 1995; Bøllingtoft, 2007; Yanow and Schwartz-Shea, 2009; Ybema et al., 2009; Hesse-Biber and Leavy, 2010; Schwartz-Shea and Yanow, 2011; Worline, 2012; James, 2017). The main purpose of these methods has been to explore what “trustworthy” or “responsible” AI means to those creating or using it, and how food system stakeholders decide whether AI technologies are trustworthy, reliable, or relevant enough to adopt.

All efforts have been made to create an atmosphere of respect and safety where genuine engagement and reflection can flourish. Ethics review by UC Davis IRB determined that this research does not fall under human subjects research (IRB ID: 1709437-1, FWA No: 00004557, dated January 29, 2021, amended and reconfirmed, May 5, 2022). However, all possible measures have been taken to ensure the privacy and confidentiality of those being asked to participate. All AIFS researchers involved in the institute in 2021 were invited to participate, of whom 75% responded and agreed to be interviewed. Additional interviews were conducted through snowball sampling (Parker et al., 2019). Most interviews were conducted on Zoom, and most were recorded; all study participants consented to participation. All participants were advised that any recordings and transcripts would be kept confidential and anonymous in order to help participants feel safe responding candidly regarding sensitive ethical issues.

Due to the highly sensitive nature of the topic, collection of demographic data that might compromise privacy was ruled out from the outset as a means of protecting the identities of both participants and non-participants (Saunders et al., 2015). Therefore, the reporting of demographic statistics was restricted to stakeholder type. To date, interviewees described here include 45 researchers, staff, and board members affiliated with the funding institute, representing more than 20 disciplines, many of whom work in multiple disciplines. Of these, most are tenured, and a very small percentage are postdoctoral researchers or graduate students. In addition, participants include 21 stakeholders unaffiliated with the institute, including people in administrative, legal, or other government, professional, or advisory roles. A small number of stakeholders are indirectly related to food system research, but most come from within the food system, spanning all areas of the food industry including agriculture, agricultural technology or “ag tech,” food packaging and distribution, and food recovery. The semi-structured interview guide for researcher interviews is available in the related article cited above (Alexander et al., 2023). All stakeholder interviewees were asked to describe the AI technologies they were familiar with or use and any challenges they have encountered or anticipate in the adoption and use of these technologies. Follow-up questions arose spontaneously in a free-flowing conversation to gather in-depth information about the interviewee's perspectives. Interviews were analyzed inductively and key themes identified, as described by Alexander et al. (2023).

A small number of food industry and researcher participants described scenarios that revealed the problems with liability and adoption discussed in this article. Unrecorded follow-up meetings with researchers and legal professionals provided clarification and additional context. Based on these interviews and conversations, a preliminary codebook of emergent themes was developed by C.A. and then cross-referenced and contextualized within the economics and law literature to test and ground our findings. To ensure rigor and quality, themes were iteratively reviewed, discussed, refined, and agreed on by C.A., A.S., and R.I. This process draws on case study methods to “provide a richness and depth to the description and analysis of the micro events and larger social structures that constitute social life” (Orum et al., 1991) and to identify “confirming…instances of theory” or “how social abstractions, such as concepts and theories, are played out at the level of experience” (McCormick, 1996). Due to the sensitivity of the subject matter and the sample size, this study did not provide enough data to clearly indicate the precise balance of variables that tend to discourage or encourage adoption. However, the data and information we received indicate an urgent need for further research on the influence of legal concerns on AI technology adoption, especially in cases where technologies could support public and worker interests.

Results

Findings from interviews suggest that a serious gap between liability law and AI technology adoption may exist or be worsening relative to available technologies for risk management. It is not so much that this gap is new as that its relative size and impact are changing as data proliferate and AI technologies become available across food system sectors. This impact is changing in three ways: (1) as algorithms increase the possibility of early and precise identification and mapping of food system risks, the potential for mitigating them also increases; (2) the leap from knowing about risks to mitigating them is significant and depends on the probability and severity of adverse effects, the cost of mitigation, and the incentives of interested parties to mitigate; (3) the responsibility, or potential for being held legally liable, for not mitigating a known risk increases as known risks increase.

Adoption and use of AI technologies are thought to depend, at least in part, on trustworthiness—that is, on producing the promised results or insights. However, the interviews reviewed here show that trustworthiness may actually disincentivize adoption of AI technologies. It may not be too much to say that, in some cases, the better these algorithms work, the more of a risk they may pose for businesses, and the less likely a business may be to adopt them.

Such a scenario, at the very least, complicates assumptions about needing algorithms that are designed ethically or responsibly. It is not that such standards are not needed, but rather that without evaluating the legal and economic environments into which these technologies are released, standards of “trustworthiness” and reliability will be insufficient to support the adoption of algorithmic tools to identify and increase the capacity for mitigating risk in the public's interests. This raises questions about when and to whom the presumed “safety” benefits are provided by these technologies, which will be discussed below.

Discussion

Much of the research on the ethical development and use of AI has focused on issues related to “trustworthy” AI. Some literature supports the pursuit of these standards while other work critiques them, but the focus is on best practices for developing AI so that it prevents bias, ensures data privacy, and generates reliable results that perform as expected (Danks, 2019; Ryan, 2020; Braun et al., 2021; Mökander and Floridi, 2021; McGovern et al., 2022). Both EU and U.S. agencies have adopted these objectives in new and proposed frameworks for guiding the development of AI (European Commission, 2023; Raimondo and Locascio, 2023). Furthermore, a recent FDA report demonstrates that the U.S. federal government is interested in the adoption of new technologies that improve predictive capacity, reduce illnesses, and increase response times in a way that provides any necessary “confidentiality and proprietary interests” protections (U. S. Food Drug Administration, 2020). The broad goal is to use advanced AI technologies to create more visibility—that is, more knowledge—within the food system, making it safer and more resilient.

In theory, under new AI or food regulation, as under any regulation that currently exists, those who violate the rules or otherwise produce products or services that fall short of legal and industry standards would be held liable. However, even when regulations are in place, they do not always produce the expected results. As Viscusi (2011) states, the “idealized world in which the tort liability system is supposed to produce efficient levels of safety is not how product liability law actually performs.” Viscusi argues that this disparity is due to large, unpredictable liability and insurance costs. He notes that these problems, in particular, tend to make product liability “a barrier to innovations that would reduce accidents” because of the many uncertainties in how courts will handle liability cases. This argument is supported by other literature indicating that legal systems are not functioning as intended or imagined (LoPucki and Weyrauch, 2000; Hopwood et al., 2014). While it is not yet clear whether or to what degree AI technologies would be managed under product liability law (Chagal-Feferkorn, 2019), AI is being used in the production of products that do fall under these laws.

Another study supports the conclusion that liability can influence the adoption of technology. Dari-Mattiacci and Franzoni (2014) argue that “negligence rules tend to have a distorting impact on the technological path.” The examples the authors analyze do not directly consider AI technologies, and relate only to technologies that would increase automation, not available knowledge. But the study suggests that liability laws can encourage adoption of technologies that reduce harm (to users, workers, or the public) when the “costs of care” for maintaining and using the technology and the “costs of harm” under liability laws are cumulatively lower than the costs of using an older technology. This framework has implications for the scenarios considered here, as will be shown in the sections that follow.
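The sketch below restates that cost-of-care versus cost-of-harm comparison as a toy calculation. It is our own illustration of the logic rather than code or figures from Dari-Mattiacci and Franzoni (2014); the class, parameters, and numbers are hypothetical.

```python
# Hypothetical sketch of the cost-of-care vs. cost-of-harm comparison discussed above.
# All names and numbers are illustrative assumptions, not values from the cited study.

from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    cost_of_care: float     # annual cost of using and maintaining the technology with due care
    p_harm: float           # residual probability of harm when due care is taken
    harm: float             # magnitude of harm if it occurs
    liability_share: float  # fraction of the harm the firm bears under the negligence rule

    def expected_private_cost(self) -> float:
        return self.cost_of_care + self.p_harm * self.harm * self.liability_share

    def expected_social_cost(self) -> float:
        return self.cost_of_care + self.p_harm * self.harm

old = Technology("paper records", cost_of_care=20_000, p_harm=0.04, harm=1_000_000,
                 liability_share=0.1)   # harms are hard to trace back: low liability exposure
new = Technology("AI risk mapping", cost_of_care=40_000, p_harm=0.01, harm=1_000_000,
                 liability_share=0.8)   # documented predictions make negligence easy to show

for tech in (old, new):
    print(f"{tech.name}: private {tech.expected_private_cost():,.0f}, "
          f"social {tech.expected_social_cost():,.0f}")
# Socially, the new technology is cheaper (50,000 vs. 60,000), yet the firm's private
# cost favors the old one (24,000 vs. 48,000). Raising the old technology's liability
# share or relaxing the new one's standard, as the authors suggest, can flip the
# private ranking so that it matches the social one.
```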

Safer when?

The concern we highlight in this article is not that the technology creates more risk. It may or may not do that. Rather, because it provides more information or insights about risks that were already present, it potentially increases the responsibility and expense of mitigating those risks to guard against claims of negligence. The interviews suggest that adopting a technology that provides more knowledge of risk may invoke or trigger a responsibility for expensive risk mitigation (Gormley and Matsa, 2011). This potential increase in risk mitigation costs may discourage companies from adopting the technology.

There is some debate about how AI will affect the economy (Furman and Seamans, 2019), but the scenario considered here is different from how technology, AI or not, is normally viewed within economics. In this case, the technology may not affect productivity directly, but rather increase knowledge regarding risks for workers and consumers. AI technologies that assess risks, for example, for the spread of disease among farm or food production workers or the spread of foodborne pathogens in the food supply, are not designed to make production more efficient, although some may improve quality (Shea, 1999). They are primarily designed to precisely map the timing and location of increased risks so that mitigation steps can be considered. In addition, these technologies may not offer direct cost-saving benefits that might compensate for the expense of adoption, as is the case, for instance, with labor-saving technologies that reduce the number of employees needed.

These factors may create a situation in which it is “safer not to know,” and companies forgo using the technology. In fact, the interviews suggest that, in cases where companies are in compliance with current legal standards, legal experts may at times advise their business clients not to adopt newer AI technologies. Then, should harm occur that results in a lawsuit, there will be less evidence to support charges of negligence (Connally, 2009). Seen from this perspective, it may be best, hypothetically, for a company not to investigate or seek more knowledge about risks than it has the resources to mitigate. Particularly when these technologies are new enough that they have not become standard throughout an industry, being on the cutting edge of technology could put a company at a disadvantage compared to its competitors.

Liability law follows various standards (Buzby et al., 2001). As Viscusi (2011) notes, “Under a negligence standard, firms will only be liable for product-related injury costs if the level of safety that they provide is below the legal standard.” Even under a no-fault standard, and where workplace causation of a harm or illness may be presumed, such presumptions are rebuttable. For example, several states recently passed laws and governors issued executive orders classifying COVID-19 as an occupational disease, making at least some workers eligible for workers' compensation coverage, but even in such cases, many workers were denied coverage on the basis of causation (Duff, 2022). Liability cases involving compensation for employees have historically been minimal. As a recent legal commentary on workers' compensation observes, “The majority of early workers' compensation laws simply did not contemplate occupational diseases. Because these early statutes were largely a reaction to increasingly common industrial accidents, they were not well-suited to handle the often slower and less detectable onset of occupational diseases” (Moore, 2021). According to Moore, “the denial rate for occupational disease claims can be up to three times higher than the rate for injury claims.” This is because employees are required to provide “causal proof that the employee's work materially contributed to the onset of the disease” (Moore, 2021). Particularly in such cases, where causation is defined as “increased risk” rather than “actual” or “proximate” cause (Duff, 2022), introducing a powerful AI tool that allows microbes to be predicted and mapped with new degrees of accuracy and certainty might help companies take steps to mitigate risk, but the same tool may also greatly alter the available evidence and long-standing practice and precedent in case law, such that adoption is discouraged.

The interviews suggest that companies loosely follow one of two broad approaches. Some companies may adopt as many new technologies as possible, actively digitizing data and maximizing transparency, risk awareness, and risk mitigation. These companies may be larger and have the resources to take these steps, and they do so with the assumption that these technologies and methods will be the best means of protecting themselves against reputational and financial damage (Seo et al., 2013). Such companies may see new digital and AI tools as ways to gain public trust in their brand, which, according to one food industry stakeholder, is a significant goal for most food producers. On the other hand, some companies may delay or choose not to digitize records and adopt these new technologies. This could be due to a lack of resources to fund a conversion from paper records and file cabinets to digital tools, a conversion that could take years to complete and be very expensive to implement and maintain.

For these or other reasons, companies in many sectors may be hesitant or completely unable to commit to the use of tools that may, in addition to these conversion costs, leave them more exposed to liability. As industry norms shift and companies throughout an industry begin to adopt certain practices that create expense, these expenses may be built into prices and passed on to consumers. But when competitors are not incurring these expenses or increasing their potential exposure to liability, a company may choose to know more only at its peril. If these liabilities disproportionately affect smaller companies, these trends may lead to smaller businesses closing and to consolidation in the hands of larger and better-resourced companies.

Safer for whom?

The question of whether it is “safer” for a company to adopt or not adopt AI technologies that increase knowledge of risk assumes that the companies are evaluating risk in terms of their own interests. Such a view may sell companies short and overlook genuine efforts to ensure quality and safety. Certainly, as indicated above with the concern for reputation and brand trust, public perception may be damaged if significant harm to public interests or wellbeing were to occur. While evaluating which methods to use or not use in terms of companies' costs certainly may provide benefits to workers and the economy, it also may implicitly prioritize a company's preservation over the interests of individual workers or consumers, or the public.

Limited liability has been noted for its controversial nature, as both a “birthright” of corporations (Rhee, 2010) and a “double-edged sword” that brings with it many economic benefits as well as problems (Simkovic, 2018). Scholars note that if a company adopts methods that may be in the public's interests but that ultimately lead to the company's bankruptcy or failure, this may not only have a negative effect on that company's employees, who will lose their jobs, but will also affect adjacent industries and people (Easterbrook and Fischel, 1985; Rhee, 2010; Simkovic, 2018). The employees of other companies may lose their jobs as investors and companies change course out of concern that their businesses may also come under scrutiny or be held liable in ways that make investment less profitable. Alternatively, surviving firms may become more profitable due to the elimination of competitors. This framing, particularly with regard to AI development and adoption, is sharply critiqued by opponents as grounded in determinism and supporting neoliberal policies that put profit above public welfare (Bourne, 2019; Greene et al., 2019). In other words, there is a tension between those who argue that business interests and survival should be considered a valid means of preventing broad worker and public harms from extending beyond the failure of one business into the rest of the economy, and those who argue that business interests should never be prioritized above the immediate needs and wellbeing of an individual business's workers. This debate is unlikely to end in consensus, and courts are left to determine when companies have prioritized their own profits and interests too much at the expense of their workers or public wellbeing.

This discussion highlights the positive externalities that food system AI technologies could bring to society (Lusk and McCluskey, 2018). Because the technology provides potential benefits not only to the company, but also to the public, the company cannot capture enough of the benefit to reap a sufficient return on its investment. The company then has an economic incentive to under-invest in the technology (Chaminade and Edquist, 2010; Hoffmann, 2016; Fuglie, 2018). Viewed in this way, the incentives of companies could be aligned with those of society by offering subsidies for adoption of the technology. However, especially in cases with unlikely but potentially severe outcomes such as serious illness or death of consumers or workers, adoption subsidies may never be large enough to offset the potential liability risk. This issue tests claims that AI can be developed and used in the interests of the public, because it is precisely such technologies that will be least likely to be adopted.
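As a back-of-the-envelope illustration of this externality argument, the sketch below compares a hypothetical firm's private net benefit of adoption with the external benefit that a subsidy could be justified by. All figures are invented for illustration and do not come from the article or its sources.

```python
# Hypothetical numbers illustrating the externality argument above; none are from the article.

# Private view of adoption (per year):
productivity_gain     = 10_000                  # direct benefit the firm captures
added_mitigation_cost = 30_000                  # expected cost of acting on newly revealed risks
added_liability_risk  = 0.01 * 5_000_000        # 1% chance of a severe, newly provable claim

private_net = productivity_gain - added_mitigation_cost - added_liability_risk

# Social view: expected value of illnesses avoided among consumers and workers,
# a benefit the firm cannot capture in prices.
external_benefit = 60_000

print(f"Firm's private net benefit of adopting:      {private_net:,.0f}")       # -70,000
print(f"Subsidy needed for adoption to break even:   {max(0, -private_net):,.0f}")
print(f"External benefit a subsidy could be tied to: {external_benefit:,.0f}")
# The break-even subsidy (70,000) exceeds the external benefit (60,000), so even a
# subsidy equal to the full social gain would not induce adoption here: the liability
# channel, not the technology's direct cost, keeps the private calculus negative.
```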

The changing standards in the use of AI and in industry practices will likely change what courts hold to be “foreseeable harms,” and this might help encourage companies to adopt knowledge tools that give them the best available information for mitigating risks. As noted in the study by Dari-Mattiacci and Franzoni, courts may alter the “costs of harm” or negligence standards to encourage rather than discourage adoption of these technologies. As they state, “If the new technology reduces harm substantially, adoption should be encouraged: courts should relax the standard of the new technology and raise that of the old one” (Dari-Mattiacci and Franzoni, 2014). Specifically, they state that in the absence of financial incentives (both from cost savings and from compliance and liability laws) to adopt a technology, companies may “disregard the effects of their technological choices on victims, [and] tend to under-adopt harm-reducing technologies” (Dari-Mattiacci and Franzoni, 2014). This being the case, Dari-Mattiacci and Franzoni recommend that, for harm-reducing technologies, courts set the costs of liability for companies at a rate that encourages adoption. Also, following Wagner's (1997) proposed interventions, if a company did not use a predictive AI technology that might have shown increased risk of a harm and therefore provided an opportunity for the firm to prevent it, the company could be presumed to be below the standard of care, with liability costs assigned proportionally, or rebuttals of the presumption of causation could be denied in favor of the plaintiff. This would set the risk of not using newer predictive AI technologies above the risk of using them, better aligning the incentives of companies with those of society.

Likewise, in cases where AI technologies do not directly improve production but rather provide new knowledge about risk, companies may be more or less likely to adopt these technologies depending on the negligence rules adopted by courts. Shavell has shown that courts often attempt to set negligence standards on the assumption that companies have obtained optimal knowledge of risk. However, when courts find it difficult to assess what level of knowledge about risk is optimal, they fall back on setting negligence standards according to lower or “customary” levels of knowledge about risk (Shavell, 1992). As Supreme Court Justice Elena Kagan indicated in her well-publicized statement during oral arguments on Section 230 in February 2023 (Gonzalez, 2023), the introduction of new AI technologies may leave courts ill-equipped to determine new technological capacities and their implications. Courts may therefore set negligence standards at lower than optimal levels. Given that liability for worker illness is often difficult to prove (Moore, 2021), liability costs in such cases will likely remain low, further disincentivizing adoption of AI technologies that provide higher than customary knowledge of risk.

Moving forward

The changes in AI capability are happening rapidly, while adaptation by industries and courts may happen slowly. It may therefore be to the advantage of the public and society to create some type of temporary “on-ramp” for new technologies, for instance, in the first few years that a new use of AI is being tested and adopted. An “on-ramp” would assist companies who would like to begin experimenting with or using AI technologies to explore risks and strategies for mitigating them. Such an approach would also allow courts time to calibrate negligence rulings and make accurate determinations that will encourage companies to obtain optimal knowledge of risk and take appropriate levels of care (Shavell, 1992). An on-ramp may include instituting an “expiration date” for data and knowledge provided by AI—meaning that the predictions are only “usable” for a designated period of time, after which they are automatically deleted or simply of no legal or other value—or providing resources to assist smaller companies with digitizing older records where necessary in order to adopt newer digital record-keeping tools. AI technology developers might also build technologies that provide users with knowledge that has an inbuilt level of plausible deniability, such as through differential privacy algorithms that add noise to the data or output to obfuscate the true data (Qian et al., 2022). Additionally, new technologies might provide not only predictions and data but also cost-effective recommendations for mitigating predicted or identified risks.
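To illustrate the kind of noise addition that differential privacy provides, the sketch below applies the standard Laplace mechanism to a hypothetical risk-model output. It is a generic illustration of the concept rather than the specific approach of Qian et al. (2022); the function, parameters, and counts are assumptions made for the example.

```python
# Minimal sketch of the Laplace mechanism for differential privacy, applied to a
# hypothetical risk-model output. Illustrative only; not from the cited work.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Return a noisy estimate: smaller epsilon = more noise = stronger deniability."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(seed=7)

# Hypothetical output of a risk model: estimated count of contaminated lots this week.
true_count = 12
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps, rng=rng)
    print(f"epsilon={eps:>4}: reported value ≈ {noisy:6.1f}")
# At small epsilon the reported figure is plausibly deniable as noise; at large epsilon
# it converges on the true count, trading deniability for actionable precision.
```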

There are also calls for regulatory “sandboxes” to support the development of AI (Truby et al., 2022). However, these ideas target the possible risks of AI technologies to users, workers, and the public. The blind spot is the scenario in which companies hesitate to adopt AI technologies that could benefit users, consumers, workers, and the public but that create new costs the beneficiaries are unwilling to pay for. AI regulatory sandboxes should therefore also support companies in exploring the use of these technologies when the risks and costs are borne not only by users, workers, and the public, but also by the companies that adopt the technologies. In this case, a company might be permitted to expand what it knows (or what is predicted) about workplace and product risks with some additional time, leniency, or support in choosing whether to adopt these technologies permanently and make the necessary modifications to facilities, processes, and training. Those willing to enter this experimental phase might benefit from additional access to resources for permanent implementation. This approach might not only facilitate the initial adoption of risk mapping technologies but might also increase access to data and facilities for the AI researchers who are developing them. Such an on-ramp would support the development of reliable technologies that sustainably support worker, user/consumer, and public wellbeing. Finally, the “sandbox” could support the development of legislation, for example, to explicitly recognize the level of certainty in an AI technology's risk detection or prediction as a factor when evaluating liability claims.

Expecting companies to bear the full burden of adopting AI technologies may be unreasonable when those technologies provide potential harm-reduction benefits to society but raise costs and liability risk for companies. Creating a short-term or temporary “on-ramp” to facilitate adoption, through adjustments to regulation, liability costs, and resources for transitioning to new technologies, may allow smoother and earlier adoption of these technologies in the best interests of workers, consumers, and the public. This approach would have added benefits for smaller companies, which may ultimately be squeezed out by larger companies as more advanced technologies and greater risk mitigation become the standard of reasonable care in food system industries.

Data availability statement

The datasets presented in this article are not readily available because the identity and privacy of participants and confidentiality of interview material, transcripts, and recordings must be protected per agreement with participants. Deidentified data will be made available upon request with permission of participants. Requests to access the datasets should be directed to csalexander@ucdavis.edu.

Ethics statement

The requirement of ethical approval was waived by University of California Davis Institutional Review Board for the studies involving humans because information was not collected about the participants but rather about business and organizational practices. The studies were conducted in accordance with the local legislation and institutional requirements. The Ethics Committee/institutional review board also waived the requirement of written informed consent for participation from the participants or the participants' legal guardians/next of kin because human subject research was not conducted. Nonetheless, consent was obtained from all participants.

Author contributions

CA: Conceptualization, Data curation, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing. AS: Conceptualization, Funding acquisition, Project administration, Supervision, Writing – review & editing. RI: Funding acquisition, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by AFRI Competitive Grant no. 2020-67021-32855/project accession no. 1024262 from the USDA National Institute of Food and Agriculture. Partial support was received from the Cornell Institute for Digital Agriculture (CIDA).

Acknowledgments

The authors wish to thank AIFS and the Cornell Institute for Digital Agriculture for their support of this research. The authors thank the interviewed participants for their time and participation.

Conflict of interest

All persons involved in the production of this manuscript are informed and familiar with the provided results and this publication. Financial interests: CA receives a salary as a postdoctoral researcher from AIFS. RI and AS receive partial support from AIFS for participation in its Food Safety, Data Privacy, and Socioeconomics and Ethics research. RI receives partial research support from CIDA. Non-financial interests: RI and AS serve on the Executive Committee for AIFS; RI is CIDA's co-director.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Alexander, C. S., Yarborough, M., and Smith, A. (2023). Who is responsible for ‘responsible AI'?: navigating challenges to build trust in AI agriculture and food system technology. Precision Agric. doi: 10.1007/s11119-023-10063-3 [Epub ahead of print].

Bøllingtoft, A. (2007). “Chapter 16: a critical realist approach to quality in observation studies,” in Handbook of Qualitative Research Methods in Entrepreneurship, eds H. Neergaard and C. Leitch (Cheltenham: Edward Elgar Publishing). doi: 10.4337/9781847204387.00027

Bourne, C. (2019). AI cheerleaders: public relations, neoliberalism and artificial intelligence. Public Relat. Inq. 8, 109–125. doi: 10.1177/2046147X19835250

Braun, M., Bleher, H., and Hummel, P. (2021). A leap of faith: is there a formula for “trustworthy” AI?. Hastings Cent. Rep. 51, 17–22. doi: 10.1002/hast.1207

Buzby, J. C., Frenzen, P. D., and Rasco, B. (2001). Product Liability Law As It Applies to Foodborne Illness. Appendix. Product Liability and Microbial Foodborne Illness. Food and Rural Economics Division, Economic Research Service, U.S. Department of Agriculture. Agricultural Economic Report No. 799. Available online at: https://www.ers.usda.gov/webdocs/publications/41289/18932_aer799.pdf

Chagal-Feferkorn, K. A. (2019). Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers, 30 Stan. L. and Pol'y Rev. 61.

Chaminade, C., and Edquist, C. (2010). “Rationales for public policy intervention in the innovation process: a systems of innovation approach,” in The Theory and Practice of Innovation Policy: An International Research Handbook (Cheltenham; Gloucester; Northampton, MA: Edward Elgar Publishing), 95–114. doi: 10.4337/9781849804424.00012

Connally, E. H. (2009). Good Food Safety Practices: Managing Risks to Reduce or Avoid Legal Liability. Food Safety and Technology, No. 32. CTAHR. Available online at: www.ctahr.hawaii.edu/oc/freepubs/pdf/FST-32.pdf (accessed April 25, 2023).

Danks, D. (2019). “The value of trustworthy AI,” in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (New York, NY: Association for Computing Machinery), 521–522. doi: 10.1145/3306618.3314228

Dari-Mattiacci, G., and Franzoni, L. A. (2014). Innovative negligence rules. Am. Law Econ. Rev. 16, 333–365. doi: 10.1093/aler/aht021

Duff, M. C. (2022). What COVID-19 Laid Bare: Adventures in Workers' Compensation Causation. San Diego Law Review, 59, Saint Louis U. Legal Studies Research Paper No. 2022-08. doi: 10.2139/ssrn.3910154

Easterbrook, F. H., and Fischel, D. R. (1985). Limited liability and the corporation. Univ. Chicago Law Rev. 52, 89–117. doi: 10.2307/1599572

European Commission (2023). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Document 52021PC0206. Available online at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206 (accessed April 13, 2023).

Fuglie, K. (2018). R&D capital, R&D spillovers, and productivity growth in world agriculture. Appl. Econ. Perspect. Policy 40, 421–444. doi: 10.1093/aepp/ppx045

Furman, J., and Seamans, R. (2019). AI and the economy. Innov. Policy Econ. 19, 161–191. doi: 10.1086/699936

Gavai, A., Bouzembrak, Y., Mu, W., Martin, F., Kaliyaperumal, R., van Soest, J., et al. (2023). Applying federated learning to combat food fraud in food supply chains. NPJ Sci. Food 7, 46. doi: 10.1038/s41538-023-00220-3

Gonzalez v. Google LLC (2023). Oral argument transcript, No. 21-1333. U.S. Supreme Court. Available online at: https://www.supremecourt.gov/oral_arguments/argument_transcripts/2022/21-1333_f2ag.pdf (accessed July 17, 2023).

Gormley, T. A., and Matsa, D. A. (2011). Growing out of trouble? corporate responses to liability risk. Rev. Financ. Stud. 24, 2781–2821. doi: 10.1093/rfs/hhr011

Greene, D., Hoffmann, A. L., and Stark, L. (2019). “Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning,” in Hawaii International Conference on System Sciences (Honolulu, HI: HICSS Conference Office). doi: 10.24251/HICSS.2019.258

Hesse-Biber, S. N., and Leavy, P. L. (2010). Handbook of Emergent Methods. New York, NY: Guilford Press.

Hoffmann, S. A. (2016). “How agricultural and environmental economists can contribute to assuring safe food,” in Invited paper presented at the Australian Agricultural and Resource Economics Society meetings, Canberra, 2–5.

Hopwood, W., Pacini, C., and Young, G. (2014). Fighting discovery abuse in litigation. Journal of Forensic and Investigative Accounting, 6. Available online at: http://web.nacva.com/JFIA/Issues/JFIA-2014-2_2.pdf

James, G. (2017). Cul-de-sacs and narrative data analysis – a less than straightforward journey. Qual. Rep. 22, 3102–3117. doi: 10.46743/2160-3715/2017.3163

Konur, S., Lan, Y., Thakker, D., Mokryani, G., Polovina, N., and Sharp, J. (2023). Towards design and implementation of Industry 4.0 for food manufacturing. Neural Comput. App. 35, 23753–23765. doi: 10.1007/s00521-021-05726-z

LoPucki, L. M., and Weyrauch, W. O. (2000). A theory of legal strategy. Duke Law J. 49, 41–87. doi: 10.2139/ssrn.203491

Lowrance, W. W. (1976). Of Acceptable Risk: Science and the Determination of Safety. doi: 10.1149/1.2132690

Lusk, J. L., and McCluskey, J. (2018). Understanding the impacts of food consumer choice and food policy outcomes. Appl. Econ. Perspect. Policy 40, 5–21. doi: 10.1093/aepp/ppx054

McCormick, B. P. (1996). N = 1: what can be learned from the single case? Leisure Sci. 18, 365–369. doi: 10.1080/01490409609513294

McGovern, A., Ebert-Uphoff, I., Gagne, D., and Bostrom, A. (2022). Why we need to focus on developing ethical, responsible, and trustworthy artificial intelligence approaches for environmental science. Environ. Data Sci. 1, E6. doi: 10.1017/eds.2022.5

Misra, N. N., Dixit, Y., Al-Mallahi, A., Bhullar, M. S., Upadhyay, R., and Martynenko, A. (2022). IoT, big data, and artificial intelligence in agriculture and food industry. IEEE Int. Things J. 9, 6305–6324. doi: 10.1109/JIOT.2020.2998584

Mökander, J., and Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds Machine. 31, 323–327. doi: 10.1007/s11023-021-09557-8

Moore, D. V. (2021). Striking a new grand bargain: workers' compensation as a pandemic social safety net. U. Chi. Legal F. 499, 499–524. doi: 10.2139/ssrn.3834807

Orum, A. M., Feagin, J. R., and Sjoberg, G. (1991). “The nature of the case study,” in A Case for the Case Study, eds J. R. Feagin, A. M. Orum, and G. Sjoberg (Chapel Hill, NC: University of North Carolina Press), 1–26.

Parker, C., Scott, S., and Geddes, A. (2019). “Snowball sampling,” in SAGE Research Methods Foundations, eds P. Atkinson, S. Delamont, A. Cernat, J. W. Sakshaug, R. A. Williams. doi: 10.4135/9781526421036831710

Potter, M. E. (1996). Risk assessment terms and definitions. J. Food Prot. 59, 6–9. doi: 10.4315/0362-028X-59.13.6

Qian, C., Liu, Y., Barnett-Neefs, C., Salgia, S., Serbetci, O., Adalja, A., et al. (2022). A perspective on data sharing in digital food safety systems. Critic. Rev. Food Sci. Nutr. 1–17. doi: 10.1080/10408398.2022.2103086

Raimondo, G. M., and Locascio, L. E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1. U.S. Department of Commerce, National Institute of Standards and Technology (NIST). doi: 10.6028/NIST.AI.100-1

Rhee, R. J. (2010). Bonding Limited Liability. William & Mary Law Review. Available online at: https://scholarship.law.wm.edu/wmlr/vol51/iss4/4

Ryan, M. (2020). In AI we trust: ethics, artificial intelligence, and reliability. Sci. Eng. Ethics 26, 2749–2767. doi: 10.1007/s11948-020-00228-y

Saunders, B., Kitzinger, J., and Kitzinger, C. (2015). Anonymising interview data: challenges and compromise in practice. Qual. Res. 15, 616–632. doi: 10.1177/1468794114550439

Schwartz-Shea, P., and Yanow, D. (2011). Designing for Trustworthiness: Knowledge Claims and Evaluations of Interpretive Research. Interpretive research design: Concepts and processes. (New York, NY: Routledge; Taylor and Francis Group). doi: 10.4324/9780203854907

Seo, S., Jang, S. S., Miao, L., Almanza, B., and Behnke, C. (2013). The impact of food safety events on the value of food-related firms: an event study approach. Int. J. Hosp. Manag. 33, 153–165. doi: 10.1016/j.ijhm.2012.07.008

Shavell, S. (1992). Liability and the incentive to obtain information about risk. J. Legal Stud. 21, 259–270. doi: 10.1086/467907

Shea, J. (1999). What do technology shocks do? NBER Macroecon. Ann. 13, 275–322. doi: 10.1086/ma.13.4623748

Simkovic, M. (2018). Limited Liability and the Known Unknown, 68 Duke L.J. 275. Available online at: https://scholarship.law.duke.edu/dlj/vol68/iss2/2

Truby, J., Brown, R., Ibrahim, I., and Parellada, O. (2022). A sandbox approach to regulating high-risk artificial intelligence applications. Eur. J. Risk Regul. 13, 270–294. doi: 10.1017/err.2021.52

U. S. Food Drug Administration (2020). New Era of Smarter Food Safety: FDA's Blueprint for the Future. Available online at: https://www.fda.gov/media/139868/download (accessed July 17, 2023).

Viscusi, W. K. (2011). Does Product Liability Make Us Safer? Vanderbilt Law and Economics Research Paper No. 11–11. doi: 10.2139/ssrn.1770031

Wagner, W. E. (1997). Choosing Ignorance in the Manufacture of Toxic Products, 82 Cornell L. Rev. 773. Available online at: https://scholarship.law.cornell.edu/clr/vol82/iss4/2

Walsham, G. (1995). The emergence of interpretivism in IS research. Inform. Syst. Res. 6, 376–394. doi: 10.1287/isre.6.4.376

Worline, M. (2012). Organizational ethnography: studying the complexities of everyday life by Sierk Ybema, Dvora Yanow, Harry Wels, and Frans Kamsteeg. Int. Public Manag. J. 15, 235–238. doi: 10.1080/10967494.2012.702595

Yanow, D., and Schwartz-Shea, P. (2009). Interpretive research: characteristics and criteria. Revue internationale de psychosociologie 15, 29–38. doi: 10.3917/rips.035.0029

Yarborough, M., Fryer-Edwards, K., Geller, G., and Sharp, R. R. (2009). Transforming the culture of biomedical research from compliance to trustworthiness: insights from nonmedical sectors. Acad. Med. 84, 472–477. doi: 10.1097/ACM.0b013e31819a8aa6

Ybema, S., Yanow, D., Wels, H., and Kamsteeg, F. (Eds.) (2009). Organizational Ethnography: Studying the Complexities of Everyday Life. SAGE Publications Ltd. doi: 10.4135/9781446278925

Keywords: liability, knowledge, technology adoption, business, machine learning, regulation, AI ethics, economics

Citation: Alexander CS, Smith A and Ivanek R (2023) Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system. Front. Artif. Intell. 6:1298604. doi: 10.3389/frai.2023.1298604

Received: 21 September 2023; Accepted: 17 November 2023;
Published: 08 December 2023.

Edited by: Ruopu Li, Southern Illinois University Carbondale, United States

Reviewed by: Gary Marchant, Arizona State University, United States; Yuangao Chen, Zhejiang University of Finance and Economics, China

Copyright © 2023 Alexander, Smith and Ivanek. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Carrie S. Alexander, csalexander@ucdavis.edu

ORCID: Carrie S. Alexander orcid.org/0000-0003-4454-9671
Renata Ivanek orcid.org/0000-0001-6348-4709
