
ORIGINAL RESEARCH article

Front. Commun., 20 November 2024
Sec. Media Governance and the Public Sphere
This article is part of the Research Topic The Impact of Artificial Intelligence on Media, Journalists, and Audiences

Ethics and journalistic challenges in the age of artificial intelligence: talking with professionals and experts

Beatriz Gutiérrez-Caneda1*, Carl-Gustav Lindén2 and Jorge Vázquez-Herrero1

  • 1Departamento de Ciencias da Comunicación, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
  • 2Department of Information Science and Media Studies, University of Bergen, Bergen, Norway

The rapid advancement of artificial intelligence (AI) is transforming the media industry by automating processes, with applications in data analysis, automated writing, format transformation, content personalization, and fact-checking. While AI integration offers new opportunities in journalism, it also raises ethical concerns around data privacy, algorithmic biases, transparency, and potential job displacement. This study employed qualitative interviews with media professionals and researchers to explore their perspectives on the ethical implications of AI integration in newsrooms. Interview data were analyzed to identify common themes and specific challenges related to AI use in journalism. The findings address issues such as the tensions between technology and journalism, ethical challenges related to AI, the evolution of professional roles in journalism, media guidelines, and potential future regulations.

1 Introduction

After more than three decades of digital journalism (Salaverría, 2019), news is going through a period of immense change and challenges. The communication paradigm has shifted from a unidirectional model to a bidirectional one, transforming the dynamics of media outlets, audience consumption, and, consequently, business models. Legacy media, once the gatekeepers of information, now compete with platforms and opinion leaders in an unequal battle, trying to keep their audiences and win new ones while facing a trust crisis worsened by the growth of dis- and misinformation. Under these circumstances, newsrooms are exploring new formats and approaching new social media platforms such as TikTok (Vázquez-Herrero et al., 2020). Technological innovation is also underway, with the integration of high-technology solutions such as virtual reality (VR) or artificial intelligence (AI) (López-García and Vizoso, 2021; Pérez-Seijo et al., 2020). Regarding technological innovation, media organizations sometimes focus on “bright, shiny things,” getting carried away by the hype of newness. Consequently, some voices within the journalism environment advocate for a “more critical reflective practice and research-informed approaches” (Posetti, 2018).

In this scenario, AI has emerged as a set of disruptive technologies, including machine learning (such as neural networks) and deep learning, as well as natural language processing (NLP) techniques such as speech and text recognition, analysis, and generation. These advancements are transforming multiple industries, including journalism. This trend has been known in academia by different names, such as “algorithmic journalism” or “computational journalism” (Díaz-Campo and Chaparro-Domínguez, 2020), “robot journalism” (Fırat, 2019), and, one of the most common, “automated journalism” (Carlson, 2015; Caswell and Dörr, 2018; Graefe, 2016). AI in the form of spellcheck or photo-editing tools has been present in the media for decades. In news automation, one of the first cases was Quakebot, a system that automated earthquake news for The Los Angeles Times using data from the United States Geological Survey (Ufarte Ruiz and Manfredi Sánchez, 2019). However, back then, AI integration in newsrooms was not a general trend, and generative AI was not yet present.
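
To make the idea of early “automated journalism” concrete, the following minimal sketch illustrates template-based news generation in the spirit of systems like Quakebot. It is not the actual Quakebot code, and the field names are hypothetical: structured feed data is slotted into a pre-written template to produce a short news brief.

```python
# Illustrative sketch of template-based automated journalism (not the
# actual Quakebot implementation; field names are hypothetical).

QUAKE_TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance_km} km from "
    "{place} at {time}, according to data from the United States "
    "Geological Survey. This brief was generated by an algorithm and "
    "reviewed by an editor before publication."
)

def write_quake_brief(event: dict) -> str:
    """Fill the template with one structured event from a data feed."""
    return QUAKE_TEMPLATE.format(**event)

print(write_quake_brief({
    "magnitude": 4.2,
    "distance_km": 11,
    "place": "Santa Monica, California",
    "time": "07:42 local time",
}))
```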

After the arrival of ChatGPT in November 2022, AI, and especially generative AI tools, entered a steep hype curve. This tool revolutionized how society saw and used AI, democratizing access to these technologies. Several sectors, including journalism, started to experiment with ChatGPT and other generative AI tools, noticing, however, that they still have important limitations (Pavlik, 2023; Gutiérrez-Caneda et al., 2023). Nevertheless, generative AI tools carry the potential to completely revolutionize different knowledge areas because, as Helberger and Diakopoulos (2023, p. 2) explained, “generative AI systems are not built for a specific context or conditions of use, and their openness and ease of control allow for unprecedented scale of use”.

Currently, AI is being integrated into newsrooms on a bigger scale, offering new possibilities in different parts of the news production process and improving efficiency and productivity. Some of the more common uses are data analytics, automated writing of simple news, and content personalization. AI also plays a key role in addressing information disorders and supporting fact-checking initiatives. These technologies are a double-edged sword: they facilitate the creation of content at massive scale, including misleading and fake information, but they are also essential tools when it comes to debunking fake news (Gonçalves et al., 2024; Gutiérrez-Caneda and Vázquez-Herrero, 2024). This progressive automation of news has brought changes in professional routines, journalist profiles, and final products, as well as some concerns, ethical questions among them. AI integration, especially generative AI, poses questions related to algorithmic and data bias, transparency of the models used, data privacy, human supervision, and even possible job losses, among others (Al-Zoubi et al., 2024; Cools and Koliska, 2024; Hermann, 2022; Forja-Pena et al., 2024). Even though AI journalism is becoming mainstream, there are only a few cases in which outlets have dispensed with journalists altogether and implemented AI-only systems, known as “synthetic media” (Ufarte-Ruiz et al., 2023).

In this scenario, media outlets, journalists’ organizations, and researchers have been creating and publishing different guidelines, as a transparency exercise and/or to offer some guidance to journalists and media (de-Lima-Santos et al., 2024). Public broadcasters such as the BBC or RTVE and other media outlets, like The Guardian, have created guidelines for their journalists, but also as a transparency exercise with their audiences (BBC, 2024; Corral, 2024; Viner and Bateson, 2024). In parallel, other organizations, such as press councils and research laboratories, have created more general documents to provide guidance and recommendations for journalists and media. For example, press councils such as the Council for Mass Media in Finland or the Catalan Press Council have published their own sets of recommendations (Presscouncils.eu, 2020; Ventura-Pocino, 2021). Among research laboratories and scientific groups, JournalismAI, an initiative of the Polis group at the London School of Economics, offers, in collaboration with the Google News Initiative, an AI Journalism Starter Pack, “a guide designed to help news organizations learn about the opportunities offered by artificial intelligence to support their journalism” (JournalismAI, 2024). In Spain, the consultancy Prodigioso Volcán has also published an updated guide for journalists regarding AI (De la Hoz et al., 2023). Another key document is the Paris Charter on AI and Journalism, developed by a commission initiated by Reporters Without Borders (Reporters Sans Frontières, 2023). Its purpose is to safeguard journalistic principles and values in the era of AI. This phenomenon has also attracted the attention of scholars, and these guidelines have been analyzed by academia (de-Lima-Santos et al., 2024). A guide has even been developed on how to create the perfect AI guideline (Cools and Diakopoulos, 2023).

This interest in AI journalism is not new. Academia has been researching it for years, studying its applications and challenges from different perspectives and using different methods, first with a more generic approach and more recently focusing on generative AI (Apablaza-Campos et al., 2024; Lao and You, 2024; Van Dalen, 2024). The academic output in this field is already large enough that literature reviews have been carried out in recent years (García-Orosa et al., 2023; Ioscote et al., 2024). The uses of AI in journalism and its possible consequences have been analyzed in research on disinformation and fact-checking, the problems of generative AI, and the transformation of business models, among other topics (Ioscote et al., 2024). The ethical debates raised by AI integration in newsrooms are also a field explored by academia (Ashok et al., 2022; Hermann, 2022; Krausová and Moravec, 2022; Shi and Sun, 2024).

Within this framework, the present research addresses the ethical question through a qualitative method, analyzing the insights of professionals and researchers in the field with the aim of shedding light on the biggest challenges and possible solutions, the evolving role of journalists, and how AI is shaping the future of media.

2 Materials and methods

The methodological approach for this study involved in-depth interviews with media professionals and researchers in the field (Gaitán Moya and Piñuel Raigada, 1998). The choice of interviews to address ethics and journalistic vulnerabilities is justified for several reasons. First, this qualitative approach allows for a deep understanding of the experiences and perceptions of those directly working in the field, providing a rich and contextualized perspective. Direct interaction with professionals and experts provides the opportunity to explore specific ethical nuances and practical challenges arising from the use of artificial intelligence in journalism. Interviews enable access to insights unavailable through other methods (Soler, 2011) and allow the capture of diverse voices and opinions, enriching the understanding of ethical impacts from different professional perspectives and contexts. Furthermore, by focusing on dialogues with those directly involved in the implementation of artificial intelligence technologies in newsrooms, emerging trends, common challenges, and potential practical solutions can be identified. This method not only offers real-time insight into the intersection of ethics and technology in journalism but also contributes to building a more robust ethical framework for the ongoing evolution of the field in this digital context.

These in-depth interviews were semi-structured, as in similar research (Cools and Koliska, 2024; Cools and Diakopoulos, 2024). An interview script was therefore prepared in advance and then adjusted during the interviews. The questions were designed to explore how the implementation of AI could positively and negatively affect journalistic practices, considering whether deontological and ethical codes are in line with this new scenario. Additionally, they addressed the role of AI guidelines created by media organizations and the next steps in the AI implementation journey. The questionnaire can be viewed in the Appendix. Interviews were conducted via Microsoft Teams or in person and recorded with the consent of the interviewees.

The news outlets chosen for this research have already integrated AI into their newsrooms and have experienced its advantages and drawbacks. They were selected not only for their work in this area but also to represent diverse models of media. Academic interviewees, in turn, were chosen because of their recent studies and impact in the automated journalism field. A total of 10 interviews were conducted between March and May 2024. Table 1 shows the selected interviewees.


Table 1. Interviewees’ profiles.

3 Results

3.1 Integrating AI in newsrooms

During the interviews, professionals and academics showed diverse opinions and concerns regarding the integration of AI in newsrooms. Tensions between technology and journalism, ethical challenges regarding AI, the evolution of professional roles in journalism, media guidelines, and possible further regulations were discussed in these conversations. In addition, some of the interviewees mentioned their worries about how media outlets cover AI-related topics.

AI integration in newsrooms, as with other innovations in other spaces, can create stress between different departments or professionals. During the interviews, tensions between technology and journalism were addressed in different ways by the respondents. Interviewees talked about misunderstandings between different parts of the team, the need for a “translational person,” and the necessity of journalists knowing a little bit about AI. When integrating AI, media buy solutions from third parties and use them as they are or adapt them, but there are also cases where media outlets develop their own tools. When it comes to adapting these tools or creating new ones from scratch, multidisciplinary teams are needed. Interviewees recognized this situation and the tensions it provokes, explaining that they are caused by misunderstandings between professionals from different fields or backgrounds. The figure of a “translational person” who can help with communication between both parties can be essential in these cases. “I’ve seen really good product development teams that have like an embedded journalist through their whole process who maybe has a little bit of background in technology or like coding and programming skills, and that can be super useful because they can translate between the two communities,” said one of the interviewees. To address this problem, interviewees also mentioned that journalists need to know more about how AI works. “In order for me to be able to drive a car, I don’t have to know about mechanics, but I do have to know the most basic things about mechanics to know if the car is running well, if the car is running badly and if I am driving it properly”, affirmed one of the respondents. As seen, AI integration in newsrooms causes changes in professional profiles and required skills. Related to this, one of the interviewees also mentioned a new role: the “journalist developer.” Similar to the translational person, this role acts as a link between journalism and technology and is played by a professional who not only understands both jobs but can also do both, at least to a certain level. Generative AI has also been helpful in this area, making coding easier and more accessible for non-expert professionals.

3.2 Evolution of professional roles

In these conversations with professionals and researchers, the evolving role of journalists and the future of media outlets were also discussed. One interviewee mentioned, “It seems that artificial intelligence is the thing that is going to end the media, isn’t it? And in the end, (…) the media crisis has been going on for a long time”.

In general, interviewees do not see AI replacing journalists as a general trend, at least not in the near future. “I think there will be journalists who will monitor the work that artificial intelligence does, but I don’t think an AI can do an interview,” stated one of them. “That intuition that comes from the experience that comes from years of experience, I don’t think that can ever be replicated by a machine”. However, this interviewee also stated that this is their opinion at present, in 2024, and that they cannot be sure it will not change. Another interviewee affirmed that “people liked to get their news from a personal source. Could be a journalist or an influencer or a friend or a celebrity, doesn’t matter, but the idea that somebody is telling me the news”.

With reference to the media ecosystem, inequity between different outlets is also a concern. “Small media that cannot access such tools (referring to AI tools created or adapted by the media outlet) logically have to be much more cautious”, affirmed one of the interviewees, in relation to the problems that can arise from the use of external tools (data privacy, algorithmic biases, etc.) and that can be better controlled in tools developed by the media outlet itself.

3.3 Ethical considerations

When asked about the major ethical challenges arising from AI integration in newsrooms, interviewees offered different answers. The issues mentioned were the purpose behind AI integration, really knowing and understanding these technologies, data privacy, technological and algorithmic dependence, the role of big tech companies (what media must know before using their tools and the dependence that can appear), job losses, algorithmic biases, and the way media communicate about AI. One of the interviewees grouped the different concerns into two categories: ethical concerns related to the technology itself (e.g., algorithmic bias and technology dependence) and ethical concerns more related to news making (possible misuses of the technology: disinformation, commercialization, sensationalism, and clickbait).

Before anything else, stated one interviewee, the choice of whether to use AI, and how this decision is made, is itself an ethical question. “Deciding whether or not to work with artificial intelligence is a decision that should be based as professionals on a criterion of knowledge or experience of something,” they affirmed. This choice must be made with knowledge about AI and from an ethical and responsible perspective. Choosing to use this set of technologies cannot be justified by mercantilist reasons such as reducing the number of employees or creating more powerful clickbait.

Understanding AI is another issue that was mentioned. Although the term “artificial intelligence” is currently well known in society, this does not mean that people truly understand what it entails; this also applies to journalists and newsrooms. “I think the biggest challenge (….) is to really understand what artificial intelligence is. And that means at a more technical level how it works, what it actually does and therefore also understanding what its limits are, what its dangers are and at a high level what it is”, affirmed one interviewee. This professional and researcher also noted that, in general, we do not have a proper understanding of AI. He emphasized that we are approaching the end of the hype cycle and will likely face disappointment because we expect AI to accomplish things beyond its capabilities. This concern about journalists’ understanding of AI is closely related to how the media represent this technology. Newsrooms play an important role in building social imaginaries and therefore bear a major responsibility regarding this issue: “This is the biggest challenge, to explain it better and to explain it better, certainly to understand it first”, explained this interviewee. The main problem related to AI representation in the media is anthropomorphism, which contributes to the perception that AI is actually intelligent. This concern was brought up by only one interviewee: “I think that it is not understood that this (AI) is software, that this is statistics. (….) At the level of representation, it is not being understood that we are talking about software and not about a living entity, a person, a thing with its own will, with decision capacity, with intelligence…. No, we are not seeing it well,” they explained.

The role of big tech companies is also a concern for professionals and researchers. The enterprises that provide AI solutions for media outlets play an important role when it comes to ethics. They need to be more transparent about tool development, about the data sets used to train the models, and regarding “what are they doing with the data journalists introduce in the AI”. Data sets determine how a model responds; they are crucial. If a data set is corrupted or biased, the resulting model will also be affected. A data privacy issue also arises here: when journalists insert a prompt into an AI tool, what does the AI do with these data? Are they saved in the tool? Are they erased? “You have to think about what are you sending to the AI. So can I send a person’s name that I’ve interviewed to some training data at OpenAI or Google? I don’t know”, expressed one of the interviewed professionals. These are important concerns that journalists need to be aware of. “The idea of media forcing tech companies is not very possible. It needs a kind of social-political solution”, affirmed one interviewee.

Job losses are another concern mentioned by the interviewees. As AI is integrated into newsrooms, interviewees expect some jobs to be lost; however, they also consider that new ones will be created. In the best scenario, AI will be integrated into newsrooms to automate some processes and save journalists time, which they could then use for more complex tasks: “It will help the journalists to be more efficient and, in fact, also improve a lot of their research because they can discover important news that we were not able to discover ourselves before because it would take so many journalists, a lot of workforce, to dig through all that information”, stated one interviewee. However, a concern that arises is the aim behind AI integration in newsrooms. One interviewee mentioned the need for “editorial responsibility” within media organizations to prevent job losses. In this case, the ethical responsibility lies with the editorial team (editors, supervisors, etc.) who decide how and why AI will be used, rather than with the individual journalist.

One of the professionals mentioned their worries about algorithmic biases; however, they noted that this was a problem they had already addressed in their case. “The algorithm may have inherited any biases we may have had or passed on when annotating. How did we mitigate this? By having several journalists making the same annotations and then cross-checking the annotations of the various journalists”, they explained.
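
The cross-checking this interviewee describes corresponds to a standard annotation practice. The sketch below is our illustration under assumed data, not the outlet’s actual pipeline: labels from several journalists are consolidated by majority vote, and disputed items are sent back for editorial review instead of entering the training set.

```python
from collections import Counter

# Illustrative majority-vote consolidation of journalists' annotations
# (not the interviewee's actual system; all data is invented). Items
# with a strict majority get a label; the rest are flagged for review.

def consolidate(annotations: dict[str, list[str]]) -> tuple[dict, list]:
    labels: dict[str, str] = {}
    disputed: list[str] = []
    for item_id, votes in annotations.items():
        label, count = Counter(votes).most_common(1)[0]
        if count > len(votes) / 2:    # strict majority required
            labels[item_id] = label
        else:
            disputed.append(item_id)  # send back for editorial review
    return labels, disputed

annotations = {
    "article-01": ["politics", "politics", "economy"],
    "article-02": ["sports", "culture", "politics"],
}
labels, disputed = consolidate(annotations)
print(labels)    # {'article-01': 'politics'}
print(disputed)  # ['article-02']
```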

Talking more specifically about which ethical aspects need to be considered when integrating AI, interviewees showed different opinions. Some of them took a more general approach, considering AI as just another tool; according to this approach, journalistic ethical codes must be followed as in any other case. Other interviewees took a more specific approach, considering that there are some ethical aspects that are especially important to monitor when integrating AI tools. In the “general approach” group, one of the interviewees mentioned that ethical questions arise in every journalistic area and posed the question of why we have this debate regarding AI but not regarding other journalistic practices, suggesting that the public demands more from automated journalism than from traditional journalism in terms of ethical considerations.

One of the interviewees from the second group mentioned the importance of considering practical issues related to the AI tools, on the one hand, and that humans are all prone to “laziness,” which can lead to dependence on technology, on the other. Regarding these “practical issues,” interviewees raised several questions that journalists should consider, such as whether the tools they are using are open access, how the tech enterprise will handle the data journalists provide through AI prompts, and the potential for bias, among other concerns. One interviewed professional mentioned the importance of choosing adequate tools, considering “adequate” those that “guarantee privacy and the absence of polarization, support democracy, they cancel out hate speech, racism, any kind of racial, sexual or linguistic bias (….). We should use technologies to unite, not to divide”. It is also worth mentioning other interviewees who took an in-between position, considering that “codes need to be adjusted to this new situation because of AI but the journalism ethics behind everything are the same”.

Journalists’ involvement in AI tool development is seen by interviewees as something positive but not always possible due to a lack of resources. In these cases, media outlets delegate AI solutions to third parties. “If you automate something and it comes from an external company, you don’t know what ethical criteria, in general, that tool is going to have, that is to say, they are not going to give you an instruction manual, (….) you are in their hands,” stated one of the interviewees. As said before, this dependence on third-party solutions leads to a dependence on big tech companies. Because the media market is smaller than industries such as pharmaceuticals, media outlets often lack the strength to demand more transparency or more power over the models they purchase. In any case, journalists’ vision is seen as vital to preserving ethical values in journalism when integrating AI solutions. “It has to be an editor which is in charge of using the AI and develop the tools using AI” [sic], affirmed one interviewee.

“Laziness,” or the lack of supervision by journalists, was also mentioned by other interviewees: even if journalists know they must check and verify the outputs of an AI tool, sometimes they do not do it. This situation can provoke two problems: first, dependence on technology, and second, mistakes that can lead to false information being published. Dependence on AI makes journalists rely on digital devices and systems for daily functioning and productivity, often leading to reduced self-sufficiency and increased vulnerability to technological disruptions. “AI can hallucinate so it’s really important that journalists understand that they have to treat this source as any other source”, said one interviewee, “they have to check, they have to verify the information coming from this source”. One of the interviewees mentioned another danger related to this technology dependence: “the algorithm can be sort of an echo chamber”.

3.4 Guidelines and further regulation

Interviewees showed a positive view of the new AI guidelines that media outlets and other organizations are creating, though with different reasoning behind this attitude. For example, one of the interviewees declared not being “a big fan of charters and codes and things because I think that journalism is a very contextual situation and that the ethical problems always happen on the border of things, in the kind of grey zone”, but affirmed that, despite this, in this case “it’s quite helpful because the process of drawing up guidelines around the use of AI means that people have to find out about AI”. In addition, another interviewee considered guidelines important because they allow media to be transparent about how they use AI.

Regulation is a complex issue, especially regarding AI and journalism. On the need for additional AI regulations, interviewees gave very diverse responses: some advocated for more regulation, while others opposed it. Among those advocating for more regulation, one interviewee strongly emphasized the need for extensive regulation and pointed out three key aspects: government regulation, self-regulation by big tech companies, and audience education. However, they specified a lack of trust in self-regulation by big tech companies.

Conversations about further regulation also brought the European Union AI Act to the table. “The AI Act is interesting, but I don’t know how much impact it is going to have”, affirmed one of the interviewees. Interviewees against further regulation maintained that there are better options to guarantee ethical AI journalism, such as journalist training or self-regulation. “I don’t think we need more regulation, but I think it’s very important that every media company trains their journalists to understand what AI is, how do we use it, how do we not use it”, affirmed one interviewee.

4 Discussion

4.1 Principal ethical concerns among researchers and professionals

AI is impacting journalism and the media at a critical time of low trust, increasing news avoidance, and declining interest in news. The move away from the conception of journalism as an essentially human field (Peña-Fernández et al., 2023) threatens its future and fuels skepticism toward digital news.

In addition to these challenges, AI integration poses ethical concerns that, while related to traditional journalistic ethical codes, must be addressed to adapt to this new landscape (Al-Zoubi et al., 2024; Forja-Pena et al., 2024). These ethical issues have already been studied extensively not only by academia, highlighting the need for continued research and dialogue in this evolving landscape, but also by media outlets and other organizations, which have been producing related reports and documents (Beckett, 2019; Beckett, 2023).

The ethical concerns related to AI are varied. Some are more linked to the nature of the technology itself, such as algorithmic biases or technology dependence, while others relate more to consequences for newsroom dynamics, such as job losses. However, all these concerns share a common element: ensuring adherence to the ethical codes inherent to journalism when integrating AI tools into newsrooms. Ethical challenges in AI journalism include the need for transparency and accountability from media outlets and tech companies, the possibility of job losses, AI hallucinations, data privacy, technology dependence, algorithmic biases, and the lack of transparency of models.

Job losses. The possibility of journalists and other news professionals losing their jobs continues to be a concern in discussions about AI integration (Murcia-Verdú and Ufarte-Ruiz, 2019; Díaz Noci et al., 2024). These concerns relate not only to media outlets completely replacing journalists, as in synthetic media cases (Ufarte-Ruiz et al., 2023), but also to partial job losses. Certain roles, particularly those involving mechanical and repetitive tasks, are more likely to be replaced by AI tools. Editorial and enterprise responsibility emerge as possible solutions: AI should only be integrated to facilitate the work of journalists, like any other tool, not to replace them.

Hallucination and human supervision. Generative AI can provide fake or misleading information in a way that can seem trustworthy (Jones, 2023). Therefore, AI-generated content must always be supervised: because AI tools can hallucinate, everything produced by the software should be checked before publication, and no generative AI content should be published without that check. The problem, as the results of this research show, is that humans do not always make good supervisors; when performing this task, they can become “lazy” and less accurate.

Data privacy. Journalists need to be aware of how AI tools are going to use the data they provide through prompts and, to make this possible, big tech companies should be transparent about their models and their data management.

Technology dependence. The use of AI can lead journalists to become more reliant on specific tools, which may result in a limited skill set. Relying always on the same technology can also lead to certain biases (e.g., always choosing the same kind of news).

Algorithmic “bias.” Algorithms are created by humans, and humans are inherently biased. Therefore, algorithms can have “bias” just as human journalists are biased to some degree (Beckett, 2019). The problem is not only the “bias” itself, which can be corrected to some extent, but also how the media will manage and diminish it. If the same tool is always used and it is biased, this bias will be repeated indefinitely. The danger of maintaining or creating biases in AI tools should be properly assessed. When using tools from third parties, the media must ask for all the relevant information, and tech companies should be able to provide it.

Copyright and plagiarism. AI models use existing content to be trained and to work. The challenge here is to ensure that they respect copyright and that journalists’ work is not plagiarized (Díaz Noci et al., 2024; Forja-Pena et al., 2024; Israel and Amer, 2022). However, it is difficult to draw the line: if an AI tool was trained on a journalist’s news story and the tool rewrites and publishes it, is that plagiarism? And how can it be prevented?

Algorithmic authorship and accountability. Related to copyright and plagiarism, AI authorship emerges as another challenge: who is responsible for the content created by an algorithm? Is the algorithm really the author? (Israel and Amer, 2022). Human supervision seems to be the requirement here: AI-generated content cannot be published without supervision, and the human supervisor would be responsible for the published content.

Lack of transparency. Being transparent about how a newsroom is using AI is key to keeping the public’s trust. It also helps to avoid misleading situations (e.g., believing an image is real when it is AI-generated). Academia has also researched whether media outlets are being transparent about how they use AI (Cools and Koliska, 2024).

These are the major challenges from a newsroom perspective when it comes to integrating AI into work routines. However, there are two related concerns that need to be addressed and that point to possible future lines of research:

How media are shaping AI perception. Media outlets contribute to building the public image of AI. The stories the news tells about these technologies play a vital role in how audiences perceive AI, and coverage is not always as accurate as it should be, for example when it humanizes AI tools and treats them as living beings. An ethical approach would be needed here to provide guidelines that help media report on AI without being sensationalist or alarmist (Sanguinetti, 2023).

How AI is shaping public opinion. Due to content personalization, users tend always to consume the same kind of content, falling into what cognitive psychologists call “confirmation bias” (Beckett, 2019): they only consume content that reinforces their previous opinions.
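
The feedback loop behind this concern can be made concrete with a minimal sketch. The following is our illustration only, not drawn from any real recommender system: a naive personalization rule that ranks stories by similarity to a user’s click history ends up serving the same kind of content over and over.

```python
from collections import Counter

# Naive personalization sketch (illustrative only): rank stories by how
# often the user has already clicked that topic. Each click reinforces
# the dominant topic, so the feed converges on one kind of content,
# which is the feedback loop behind "confirmation bias" concerns.

def recommend(stories: list[dict], history: Counter) -> dict:
    return max(stories, key=lambda s: history[s["topic"]])

stories = [
    {"title": "Budget vote splits parliament", "topic": "politics"},
    {"title": "Local team wins derby", "topic": "sports"},
]
history = Counter({"politics": 3, "sports": 1})

for _ in range(3):
    pick = recommend(stories, history)
    history[pick["topic"]] += 1   # clicking reinforces the bias
    print(pick["title"])          # always the politics story
```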

4.2 What to consider when integrating AI in a newsroom

Integrating AI into a newsroom is always a stressful process, as with any other new technology or innovation. However, due to the characteristics of AI tools, some key aspects should be considered from an ethical perspective: (1) consider whether integrating AI is an ethical and journalistic decision (i.e., AI tools will not be used to create misinformation or to pursue clickbait); (2) consider practical issues regarding AI (e.g., which tools will be used, whether external or internal, and how they manage data); and (3) ensure that journalists have a foundational understanding of how AI tools work, including key aspects such as how to build prompts, the sources of data used by AI models, the potential for biases and hallucinations, and best practices for handling data to protect users’ privacy.

4.3 Guidelines and regulations

Journalism tends to operate in a gray zone; therefore, it can be difficult to regulate. External editorial regulation is not seen as the best solution due to the inherent characteristics of journalism: hard, specific regulation can hamper journalists’ work, while soft regulation is of little use. In this situation, “further self-regulation and the elaboration of ethical principles and guidelines within newsrooms” (Porlezza, 2023) can be a better option. AI guidelines are perceived as a positive step because they allow media outlets to reflect on ethical codes and how AI is going to fit in with them. Furthermore, AI media guidelines are also useful for the public and contribute to the ethical use of AI in media: they offer transparency on how a newsroom is using AI, providing the audience with the information they need to properly judge the content they are consuming, and they also help build trust.

4.4 Collaboration as the key to better journalism

Media outlets, especially small ones, do not always have enough funding to afford their own AI solutions, and depending on third parties is not always the best option. Collaboration between different newsrooms can facilitate the sharing of AI tools specifically created for journalism. In addition, by working together, media outlets can put more pressure on big tech companies to demand more transparency about how they manage data or how they train their models. For example, fact-checking initiatives have been collaborating in this sense, sharing experiences and sometimes even tools (Gutiérrez-Caneda and Vázquez-Herrero, 2024).

4.5 Looking at the future: AI will not kill journalists

The integration of AI in newsrooms is inevitable at a general level. However, this integration can manifest in various ways. At the time of writing this paper, a scenario where media organizations operate without journalists as a general trend is not considered feasible (Beckett, 2019). There are some media outlets managed solely by AI (synthetic media), and their number is likely to increase, but this is not expected to become the predominant pattern. These technologies are anticipated to be used as tools in newsrooms, primarily under human supervision. However, even if this does not imply the complete elimination of jobs, AI automation will render certain roles redundant.

AI is not going to substitute journalists completely. Some enterprises will create media without journalists, but the output of these initiatives is not of good quality and cannot be compared, at the moment, with the content provided by “real” media. By contrast, there will be media whose value proposition is high-quality, human-made journalism.

The integration of artificial intelligence offers new opportunities for journalism and, by reducing the costs of some tasks, can be convenient for small newsrooms. However, this technology can also cause inequity for several reasons. On the one hand, big media outlets will be able to create or co-create their own tools, making them less dependent (or at least not as much) on big tech companies. On the other hand, smaller media outlets, with less funding, may be obliged to rely on less reliable options and will have fewer resources for training their workers. This situation can exacerbate the ethical challenges mentioned in this paper, such as job losses, data privacy problems, or the lack of deep knowledge of AI.

4.6 Limitations and future research lines

Some limitations need to be considered, such as the number of interviewees and the limited number of countries involved in this research. The results cannot be extrapolated to different media scenarios. In future research, a second set of interviews with respondents from continents other than Europe could help to complete the picture. Furthermore, conducting a Delphi panel would be useful to define and refine the challenges and possible solutions associated with integrating AI into newsrooms.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

BG-C: Writing – review & editing, Writing – original draft, Methodology, Investigation, Data curation, Conceptualization. C-GL: Writing – review & editing, Writing – original draft, Supervision, Methodology, Conceptualization. JV-H: Writing – review & editing, Writing – original draft, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Conceptualization.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This article is part of the R&D project Digital-native media in Spain: Strategies, competencies, social involvement and (re)definition of practices in journalistic production and diffusion (PID2021-122534OB-C21), funded by MICIU/AEI/10.13039/501100011033 and by “ERDF/EU.” Beatriz Gutiérrez-Caneda holds a predoctoral contract from Xunta de Galicia with reference number ED481A 2022/209.

Acknowledgments

Special thanks are extended to all the individuals who generously shared their time, insights, and expertise through interviews for this article. Their valuable contributions have been essential in providing a deeper understanding of the challenges posed by AI integration in newsrooms.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Al-Zoubi, O., Ahmad, N., and Hamid, N. A. (2024). Artificial intelligence in newsrooms: ethical challenges facing journalists. Stud. Media Commun. 12:6587. doi: 10.11114/smc.v12i1.6587

Apablaza-Campos, A., Wilches Tinjacá, J. A., and Salaverría, R. (2024). Generative artificial intelligence for journalistic content in Ibero-America: perceptions, challenges and regional projects. BiD 52, 1–8. doi: 10.1344/bid2024.52.06

Ashok, M., Madan, R., Joha, A., and Sivarajah, U. (2022). Ethical framework for artificial intelligence and digital technologies. Int. J. Inf. Manag. 62:102433. doi: 10.1016/j.ijinfomgt.2021.102433

BBC (2024). Guidance: the use of artificial intelligence. Available at: https://www.bbc.co.uk/editorialguidelines/guidance/use-of-artificial-intelligence (Accessed October 2, 2024).

Beckett, C. (2019). New powers, new responsibilities: a global survey of journalism and artificial intelligence. The Journalism AI report. Available at: https://drive.google.com/file/d/1utmAMCmd4rfJHrUfLLfSJ-clpFTjyef1/view (Accessed July 14, 2024).

Beckett, C. (2023). Generating change: a global survey of what news organizations are doing with AI. The Journalism AI report. Available at: https://static1.squarespace.com/static/64d60527c01ae7106f2646e9/t/656e400a1c23e22da0681e46/1701724190867/Generating+Change+_+The+Journalism+AI+report+_+English.pdf (Accessed July 14, 2024).

Carlson, M. (2015). The robotic reporter. Digit. Journal. 3, 416–431. doi: 10.1080/21670811.2014.976412

Caswell, D., and Dörr, K. (2018). Automated journalism 2.0: event-driven narratives. Journal. Pract. 12, 477–496. doi: 10.1080/17512786.2017.1320773

Cools, H., and Diakopoulos, N. (2023). Towards guidelines for guidelines on the use of generative AI in newsrooms. Medium. Available at: https://generative-ai-newsroom.com/towards-guidelines-for-guidelines-on-the-use-of-generative-ai-in-newsrooms-55b0c2c1d960 (Accessed July 14, 2024).

Cools, H., and Diakopoulos, N. (2024). Uses of generative AI in the newsroom: mapping journalists’ perceptions of perils and possibilities. Journal. Pract. 27, 1–19. doi: 10.1080/17512786.2024.2394558

Cools, H., and Koliska, M. (2024). News automation and algorithmic transparency in the newsroom: the case of the Washington Post. Journal. Stud. 25, 662–680. doi: 10.1080/1461670X.2024.2326636

Corral, D. (2024). IA en un gran medio público, avanzando con certeza y garantías entre la incertidumbre. RTVEIA. Available at: https://www.rtveia.es/publicaciones

De la Hoz, K., Coelho, F., and Prodigioso Volcán (2023). IA para periodistas. Available at: https://www.prodigiosovolcan.com/sismogramas/ia-periodistas/

de-Lima-Santos, M.-F., Yeung, W. N., and Dodds, T. (2024). Guiding the way: a comprehensive examination of AI guidelines in global media. AI Soc. 15, 1–19. doi: 10.1007/s00146-024-01973-5

Díaz-Campo, J., and Chaparro-Domínguez, M. A. (2020). Periodismo computacional y ética. Revista ICONO 14 18, 10–32. doi: 10.7195/ri14.v18i1.1488

Díaz Noci, J., Peña-Fernández, S., Meso-Ayerdi, K., and Larrondo-Ureta, A. (2024). The influence of AI in the media workforce: how companies use an array of legal remedies. Trípodos 55, 33–54. doi: 10.51698/tripodos.2024.55.03

Fırat, F. (2019). “Robot journalism” in The International Encyclopedia of Journalism Studies, 1–5. doi: 10.1002/9781118841570.iejs0243

Forja-Pena, T., García-Orosa, B., and López-García, X. (2024). The ethical revolution: challenges and reflections in the face of the integration of artificial intelligence in digital journalism. Commun. Soc. 37, 237–254. doi: 10.15581/003.37.3.237-254

Gaitán Moya, J., and Piñuel Raigada, J. (1998). Técnicas de investigación en comunicación social.

García-Orosa, B., Canavilhas, J., and Vázquez-Herrero, J. (2023). Algorithms and communication: a systematized literature review. Comunicar 31, 9–21. doi: 10.3916/C74-2023-01

Gonçalves, A., Torre, L., Oliveira, F., and Jerónimo, P. (2024). AI and automation’s role in Iberian fact-checking agencies. Prof. Inf. 33:212. doi: 10.3145/epi.2024.0212

Graefe, A. (2016). Guide to automated journalism. Columbia Journalism Review. Available at: https://www.cjr.org/tow_center_reports/guide_to_automated_journalism.php/ (Accessed July 14, 2024).

Gutiérrez-Caneda, B., and Vázquez-Herrero, J. (2024). Redrawing the lines against disinformation: how AI is shaping the present and future of fact-checking. Trípodos 55:4. doi: 10.51698/tripodos.2024.55.04

Gutiérrez-Caneda, B., Vázquez-Herrero, J., and López-García, X. (2023). AI application in journalism: ChatGPT and the uses and risks of an emergent technology. Prof. Inf. 32:14. doi: 10.3145/epi.2023.sep.14

Helberger, N., and Diakopoulos, N. (2023). ChatGPT and the AI Act. Internet Policy Rev. 12:1682. doi: 10.14763/2023.1.1682

Hermann, E. (2022). Artificial intelligence and mass personalization of communication content—an ethical and literacy perspective. New Media Soc. 24, 1258–1277. doi: 10.1177/14614448211022702

Ioscote, F., Gonçalves, A., and Quadros, C. (2024). Artificial intelligence in journalism: a ten-year retrospective of scientific articles (2014–2023). Journal. Media 5, 873–891. doi: 10.3390/journalmedia5030056

Israel, M. J., and Amer, A. (2022). Rethinking data infrastructure and its ethical implications in the face of automated digital content. AI Ethics 3, 427–439. doi: 10.1007/s43681-022-00169-1

Jones, B. (2023). Generative AI and journalism: a rapid risk-based review. Edinburgh: Edinburgh Research Explorer.

JournalismAI (2024). AI journalism starter pack. Available at: https://www.skeyesmedia.org/documents/bo_filemanager/AI-journalism-Starter-Pack-_-A-guide-by-JournalismAI.pdf (Accessed July 14, 2024).

Krausová, A., and Moravec, V. (2022). Disappearing authorship: ethical protection of AI-generated news from the perspective of copyright and other laws. JIPITEC 13:132.

Lao, Y., and You, Y. (2024). Unraveling generative AI in BBC News: application, impact, literacy and governance. Transform. Gov. People Process Policy. doi: 10.1108/TG-01-2024-0022 [Epub ahead of print].

López-García, X., and Vizoso, A. (2021). Periodismo de alta tecnología: signo de los tiempos digitales del tercer milenio. Prof. Inf. 30:1. doi: 10.3145/epi.2021.may.01

Murcia-Verdú, F. J., and Ufarte-Ruiz, M. J. (2019). Mapa de riesgos del periodismo hi-tech. Hipertext.net 18, 47–55. doi: 10.31009/hipertext.net.2019.i18.05

Pavlik, J. (2023). Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education. J. Mass Commun. Educ. 78, 84–93. doi: 10.1177/10776958221149577

Peña-Fernández, S., Meso-Ayerdi, K., Larrondo-Ureta, A., and Díaz-Noci, J. (2023). Without journalists, there is no journalism: the social dimension of generative artificial intelligence in the media. Prof. Inf. 32:27. doi: 10.3145/epi.2023.mar.27

Pérez-Seijo, S., Gutiérrez-Caneda, B., and López-García, X. (2020). Periodismo digital y alta tecnología: de la consolidación a los renovados desafíos. Index Comun. 10, 129–151. doi: 10.33732/ixc/10/03Period

Porlezza, C. (2023). Promoting responsible AI: a European perspective on the governance of artificial intelligence in media and journalism. Communications 48:091. doi: 10.1515/commun-2022-0091

Posetti, J. (2018). Time to step away from the ‘bright, shiny things’? Towards a sustainable model of journalism innovation in an era of perpetual change. J. Innov. Proj. 1, 1–30. doi: 10.60625/risj-kmpg-q993

Presscouncils.eu (2020). The use of algorithms and artificial intelligence in media outlets. Available at: https://www.presscouncils.eu/the-use-of-algorithms-artificial-intelligence-in-media-outlets/ (Accessed October 2, 2024).

Reporters Sans Frontières (2023). Paris charter on AI and journalism. Available at: https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20charter%20on%20AI%20in%20Journalism.pdf (Accessed October 2, 2024).

Salaverría, R. (2019). Digital journalism: 25 years of research. Prof. Inf. 28:1. doi: 10.3145/epi.2019.ene.01

Sanguinetti, P. (2023). Tecnohumanismo. Madrid: La Huerta Grande Editorial.

Shi, Y., and Sun, L. (2024). How generative AI is transforming journalism: development, application and ethics. Journal. Media 5, 582–594. doi: 10.3390/journalmedia5020039

Soler, P. (2011). “La investigación cualitativa. Un enfoque integrador,” in La investigación en comunicación: métodos y técnicas en la era digital, coord. L. Vilches Manterola, 189–236.

Ufarte Ruiz, M. J., and Manfredi Sánchez, J. L. (2019). Algorithms and bots applied to journalism. The case of Narrativa Inteligencia Artificial: structure, production and informative quality. Doxa Comun. 29, 213–233. doi: 10.31921/doxacom.n29a11

Ufarte-Ruiz, M. J., Murcia-Verdú, F. J., and Túñez-López, J. M. (2023). Use of artificial intelligence in synthetic media: first newsrooms without journalists. Prof. Inf. 32:2. doi: 10.3145/epi.2023.mar.03

Van Dalen, A. (2024). Revisiting the algorithms behind the headlines: how journalists respond to professional competition of generative AI. Journal. Pract. 14, 1–18. doi: 10.1080/17512786.2024.2389209

Vázquez-Herrero, J., Negreira-Rey, M. C., and López-García, X. (2020). Let’s dance the news! How the news media are adapting to the logic of TikTok. Journalism 23, 1717–1735. doi: 10.1177/1464884920969092

Ventura-Pocino, P. (2021). Algorithms in the newsrooms: challenges and recommendations for artificial intelligence with the ethical values of journalism. Barcelona: Fundació Consell de la Informació de Catalunya (Catalan Press Council).

Viner, K., and Bateson, A. (2024). The Guardian’s approach to generative AI. The Guardian. Available at: https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai (Accessed October 2, 2024).

Appendix

Interview model

Integrating AI in the newsrooms

1. What are the tensions between technology and journalistic practice regarding AI?

2. Are there any tensions between different departments/strategies within the organizations related to AI integration? Which ones?

Professional roles

1. Is it necessary to involve journalists in the development of AI solutions to ensure that the core values of journalism are included? How should they be involved in this process?

2. In light of advancing AI technologies, how do you foresee the evolving role of journalists in newsrooms where AI plays an increasingly prominent role? Do you believe there will be a point in the future where AI could autonomously handle the entire news production process, potentially leading to media outlets operating without human involvement on a regular basis? If so, how do you envision journalists adapting to this paradigm shift?

Ethical considerations

1. Considering the ethical considerations inherent in journalism, do you believe that as human involvement in the process diminishes, the values of integrity, accuracy, and accountability may also decline in AI-driven media?

2. What are the biggest challenges of AI integration in newsrooms from an ethical perspective?

3. What ethical aspects should be considered when integrating AI tools into journalistic routines?

4. Have you ever experienced some of these challenges or any other related to the use of AI working as a journalist? Which ones? Can you explain them?

Guidelines and further regulation

1. Almost all media outlets, but also press councils and journalists’ associations are developing specific ethical guides for the use of AI in the production of journalistic content. Do you think these guides are necessary to ensure integrity and ethics in the journalistic profession? Are they enough or is it necessary to have another kind of regulation?

2. If these guides are not enough, what measures or policies do you think should be implemented to promote greater transparency and accountability in the use of AI in the media from an ethical perspective? Who should take the initiative together with journalistic organizations?

Keywords: AI, AI journalism, ethics, AI guidelines, algorithmic journalism

Citation: Gutiérrez-Caneda B, Lindén C-G and Vázquez-Herrero J (2024) Ethics and journalistic challenges in the age of artificial intelligence: talking with professionals and experts. Front. Commun. 9:1465178. doi: 10.3389/fcomm.2024.1465178

Received: 15 July 2024; Accepted: 14 October 2024;
Published: 20 November 2024.

Edited by:

Simón Peña-Fernández, University of the Basque Country, Spain

Reviewed by:

Javier Díaz-Noci, Pompeu Fabra University, Spain
Ana Serrano Tellería, University of Castilla La Mancha, Spain

Copyright © 2024 Gutiérrez-Caneda, Lindén and Vázquez-Herrero. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Beatriz Gutiérrez-Caneda, beatriz.gutierrez.caneda@usc.es
