BRIEF RESEARCH REPORT article
Front. Polit. Sci., 07 February 2025
Sec. Politics of Technology
Volume 7 - 2025 | https://doi.org/10.3389/fpos.2025.1517726
This article is part of the Research Topic Disinformation Countermeasures and Artificial Intelligence.
Disinformation has recently become a subject of widespread concern across the globe. To combat this issue, various initiatives have emerged that aim to identify, track, and debunk disinformation. Artificial intelligence (AI) has been incorporated as a tool to counter disinformation, but its implementation has not always been successful and may even prove counterproductive. There is thus a growing recognition of the need to benchmark the various ongoing efforts, both to ensure greater efficacy and coordination in the use of AI and to ensure that it does not lead to forms of algorithmic censorship. Our goal is to map the projects that use AI to counter disinformation through an analysis of their hyperlink networks, shedding light on their aims, approaches, and challenges.
The proliferation of digital media and social networking sites has enabled the rapid spread of problematic information (Vosoughi et al., 2018), and traditional approaches to media regulation and censorship no longer seem sufficient to address this challenge (Alemanno, 2018; Marsden et al., 2020). Governments, academia, and civil society are all searching for ways to combat this problem, and artificial intelligence (AI) is increasingly seen as an appealing tool in this fight. AI has the potential to automate the identification of false or misleading information, which can then be flagged or removed before it spreads widely (Bontridder and Poullet, 2021). Organizations such as the European Union1 and the United Nations2 have launched initiatives to support the development of AI-powered fact-checking tools, while private companies like Meta3 and Google4 have invested in AI to help identify and remove false content from their platforms.
Although these top-down initiatives play an important role in addressing disinformation, they cannot operate alone. Indeed, recognition of bottom-up actions that empower journalists and civil society organizations to combat disinformation is also growing (Golovchenko et al., 2018). Fact-checking initiatives have emerged as critical players in the fight against disinformation, providing a valuable service to citizens and journalists alike (Graves, 2016; Porter and Wood, 2021). Many of these initiatives rely on AI to quickly identify and analyze large volumes of information and, in many cases, also develop their own AI-powered tools to enhance their fact-checking capabilities. For example, Full Fact,5 a non-profit fact-checking organization, has developed a review system that uses AI to identify claims made in political speeches and news articles and verify their accuracy. Similarly, NewsGuard,6 a US-based company, has launched a tool for training generative AI services to recognize the most significant false narratives spreading online and to use its ratings of web sources as signals that help both the machines and the users of AI models identify trustworthy news and information.
While there is no single solution to the problem of disinformation, AI has the potential to play a role in mitigating its impact (Kertysova, 2018). However, it is important that these efforts be transparent, carefully designed, and implemented step by step to ensure that they are effective and do not inadvertently harm free speech or democratic processes through forms of algorithmic surveillance and censorship (Marsden and Meyer, 2019; Gorwa, 2019). Recently, there has been a surge of research focused on leveraging machine learning to identify and flag false information, predict the virality potential of fake news, and provide comprehensive fact-checking and verification services (Choraś et al., 2021). Using natural language processing (NLP) and machine learning algorithms, AI can analyze the language, sentiment, and structure of social media posts and news articles to detect patterns and identify potentially misleading or false content. AI can also be used to track the spread of disinformation and identify its sources to prevent it from spreading further, analyzing text, images, and videos for patterns and anomalies that may indicate disinformation. Furthermore, rather than being used in isolation (a use that is fraught with risks), AI can assist fact-checkers and journalists in verifying information, identifying sources, cross-checking claims, and providing additional context to speed up the fact-checking process and improve accuracy (Bontcheva et al., 2024).
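As a minimal illustration of the kind of NLP-based detection described above (a sketch only, not a tool used by any of the initiatives discussed here), a simple bag-of-words classifier can score how likely a post is to be misleading before routing it to human reviewers. The load_labeled_claims helper and the training data it stands for are hypothetical placeholders.

```python
# Illustrative sketch: TF-IDF features plus logistic regression for
# flagging potentially misleading claims. Hypothetical data loader.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts, labels = load_labeled_claims()  # hypothetical: texts with 0/1 "misleading" labels

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Probability that a new post is misleading; high scores would be routed
# to human fact-checkers rather than acted on automatically.
scores = clf.predict_proba(["a new claim to check"])[:, 1]
```

Note that in such a pipeline the model only prioritizes content for review; the final judgment remains with human fact-checkers, in line with the human-in-the-loop use the literature recommends.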
Recent research has demonstrated that cutting-edge language models can provide trustworthiness ratings for a diverse array of news outlets, accompanied by contextual explanations that align closely with human expert judgments (Yang and Menczer, 2023). This suggests a potential uptick in the adoption of these tools by fact-checking organizations in their ongoing efforts (Graves, 2018).
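To make this concrete, the snippet below sketches how such a trustworthiness rating might be elicited from a general-purpose LLM. It assumes the OpenAI Python client; the model name, prompt wording, and rate_outlet helper are illustrative assumptions, not the protocol of Yang and Menczer (2023).

```python
# Sketch of eliciting a source-credibility rating from an LLM, loosely
# inspired by Yang and Menczer (2023). Prompt and model are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def rate_outlet(domain: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": (
                f"Rate the credibility of the news outlet {domain} "
                "on a 0-1 scale and briefly justify the rating."
            ),
        }],
    )
    return response.choices[0].message.content

print(rate_outlet("example-news.com"))
```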
The continued advancement of AI-driven technologies has paved the way for more sophisticated approaches to combat the spread of disinformation. State-of-the-art machine learning models are now capable of discerning increasingly nuanced patterns in content generation and dissemination, allowing for more efficient identification and containment of misleading or false information. These developments hold promise for a future where AI not only aids in flagging and verifying the authenticity of information, but also actively counters the propagation of falsehoods by issuing alerts across a diverse set of social media platforms and search engines (Bontcheva et al., 2024). These two approaches address the problem of disinformation at distinct points in the information ecosystem: the first tackles it downstream, identifying and managing misleading content after it has spread; the second works upstream, aiming to prevent the proliferation of falsehoods in the first place through proactive interventions.
As these AI-based initiatives continue to evolve, it is essential to integrate them into a broader framework and establish a common ground. In fact, despite challenges and setbacks7, increasing efforts are being invested in AI as an assistant to help identify problematic information and counter the spread of disinformation (Graves, 2018). Building on these premises, the objective of our research is to map the landscape of initiatives that use AI to combat disinformation.
Specifically, in this paper we analyze the hyperlink citation structure of the websites of initiatives that use AI to fight disinformation. Leveraging a mix of computational techniques and qualitative insights, we aim to identify and categorize these initiatives, as well as their approaches, goals, and challenges, thus providing a comprehensive and critical state of the art for this emerging field.
In order to map the landscape of AI initiatives in the fight against disinformation, we relied on a web mapping approach (Severo and Venturini, 2016). This method operates on the idea that hyperlinks can serve as proxies for social connections. Despite the relatively low cost of creating a hyperlink, it has been consistently observed that web authors are meticulous when establishing connections. They tend to preferentially cite websites that share their thematic or social focus and avoid citing those with opposing viewpoints, leading to a selective organization of the web (Ooghe-Tabanou et al., 2018).
Websites link their discourse to other online discourses to establish hierarchies and clusters, resulting in a network of networks where densely connected zones are separated by relatively empty spaces. These territories correspond to thematic communities, where actors with similar interests and viewpoints gather. By examining the hyperlinks connecting websites dedicated to initiatives that use AI to combat disinformation, we can gain insight into the networks of actors concerned with AI counter-disinformation. In essence, knowing which sites are hyperlinked can reveal which actors are likely to be connected in their effort to counter disinformation through AI.
To create a map adhering to the best practices of hyperlink analysis, we first identified a list of websites referring to initiatives that use or create AI against disinformation. To construct the list, we implemented the following steps:
1. We searched online for pre-existing lists compiled by research or public institutions.
2. We explored these lists8,9,10,11 and found 223 websites directly related to tools, projects or initiatives of counter-disinformation.
3. Through manual verification, we selected 117 websites12 of initiatives that were actively engaged13 in the development or use of AI against disinformation.
4. We excluded inactive websites, resulting in a final list of 81 websites (a minimal sketch of such a liveness check is given below).
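The inactivity filter in step 4 could be automated along the lines sketched below, using Python's requests library. The candidate URLs and timeout are illustrative assumptions, not necessarily the exact procedure followed in this study.

```python
# Sketch of the liveness check in step 4: keep only websites that still
# respond with a non-error status. Candidate list shown is illustrative.
import requests

candidates = ["https://fullfact.org", "https://www.newsguardtech.com"]  # ... the 117 URLs

active = []
for url in candidates:
    try:
        r = requests.head(url, timeout=10, allow_redirects=True)
        if r.status_code < 400:
            active.append(url)
    except requests.RequestException:
        pass  # unreachable hosts are treated as inactive

print(f"{len(active)} active websites retained")
```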
The final list of websites served as the starting point for a web crawl. Using Hyphe (Jacomy et al., 2016), we extracted all hyperlinks present on the websites in our list with a crawling depth of two (i.e., visiting all pages up to two clicks away from the chosen starting pages). Based on this information, we established a citation network connecting the webpages on the list. In this network, each website is represented as a node, with edges representing its incoming and outgoing hyperlinks. The final network comprises 81 nodes and 393 links. To explore its hyperlink structure, we exported and analyzed the graph in Gephi (Bastian et al., 2009) and, to correctly interpret and discuss the relationships that emerged within the network, we carried out a systematic reading of all the documents and media content on each website in the map, focusing on the aims, approaches, and challenges highlighted by the initiatives themselves.
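For readers who wish to reproduce this step outside Hyphe, the sketch below rebuilds such a citation network with the networkx library. It assumes a hypothetical edges.csv export with source and target columns; Hyphe's actual export format may differ.

```python
# Sketch of rebuilding the citation network from a hypothetical Hyphe
# export (edges.csv with source,target columns) and computing node degree.
import csv
import networkx as nx

G = nx.DiGraph()
with open("edges.csv", newline="") as f:
    for row in csv.DictReader(f):  # columns assumed: source,target
        G.add_edge(row["source"], row["target"])

# Total degree (in + out), later used to size nodes and labels on the map.
degree = dict(G.degree())
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "links")
```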
To properly read and interpret the map shown in Figures 1–3, there are several factors to consider.
Firstly, the position of nodes in space is determined by the Force Atlas 2 algorithm, which considers the strength and type of connections between nodes.14 The closer two nodes are in the visualization, the stronger and more numerous their direct or indirect connections (Venturini et al., 2021).
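Gephi implements Force Atlas 2 natively. As a rough stand-in, the snippet below uses networkx's spring_layout, which implements the related Fruchterman-Reingold force-directed algorithm (a deliberate substitution, since Force Atlas 2 itself does not ship with networkx). The placeholder graph stands in for the 81-node hyperlink network.

```python
# Rough stand-in for Gephi's Force Atlas 2: the Fruchterman-Reingold
# force-directed layout in networkx. Connected nodes are pulled together
# while all nodes repel each other, as in the physical analogy of note 14.
import networkx as nx

G = nx.karate_club_graph()  # placeholder graph standing in for the 81-node map
pos = nx.spring_layout(G, iterations=200, seed=42)  # {node: (x, y)} coordinates
```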
Secondly, the heat map superimposed on the network was constructed using Graph Recipes.15 This heat map shows node density, with darker gray gradients indicating higher density and lighter gray gradients representing less dense areas. The heat map is thus used to highlight the different clusters of nodes present in the network.
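Such a density layer can be approximated with a Gaussian kernel density estimate over the node positions, as sketched below with scipy and matplotlib. This is an analogue of what Graph Recipes produces, not its actual code, and the graph is again a placeholder.

```python
# Sketch of the density layer: a Gaussian kernel density estimate over
# node positions from a force-directed layout, drawn as a gray heat map.
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
from scipy.stats import gaussian_kde

G = nx.karate_club_graph()                       # placeholder graph
pos = nx.spring_layout(G, iterations=200, seed=42)

xy = np.array(list(pos.values())).T              # shape (2, n_nodes)
kde = gaussian_kde(xy)                           # density over node positions

grid_x, grid_y = np.mgrid[-1.1:1.1:200j, -1.1:1.1:200j]
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_x.shape)

plt.imshow(density.T, origin="lower", extent=(-1.1, 1.1, -1.1, 1.1), cmap="Greys")
plt.scatter(*xy, s=10, color="black")            # nodes over the density layer
plt.show()
```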
Thirdly, the size of nodes and their labels is proportional to the total number of edges entering or leaving the node, i.e., its degree.
Finally, node colors are also significant. In Figure 1, colors represent the modularity class of nodes as identified by the Louvain algorithm (Blondel et al., 2008). In Figure 2, colors represent geographical location. Finally, in Figure 3, colors represent the specific category of the nodes: blue nodes represent websites related to EU-funded projects, red nodes represent research institutes (including universities and other public or private research centers), aqua nodes represent information technology (IT) facilities (both public and private), green nodes represent fact-checking agencies, and yellow nodes represent AI tools that can be directly used to detect and counter disinformation.
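The modularity classes of Figure 1 can be reproduced with the Louvain method (Blondel et al., 2008) as implemented in networkx; the graph below is again a placeholder for the hyperlink network.

```python
# Sketch of the modularity coloring in Figure 1: Louvain community
# detection (Blondel et al., 2008) via networkx, on a placeholder graph.
import networkx as nx

G = nx.karate_club_graph()  # placeholder for the 81-node hyperlink network
communities = nx.community.louvain_communities(G, seed=42)

# Map each node to the index of its community, i.e. its color class.
color_class = {n: i for i, c in enumerate(communities) for n in c}
```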
Figures 1–3 allow us to explore the emergence of three distinct topological areas in our network map, which largely overlap in terms of geographical location and actor category.
The largest area is located at the top of the map and is dominated by European counter-disinformation initiatives. This cluster is primarily composed of websites associated with Horizon 2020 projects and European research institutes. Furthermore, a roughly equal number of nodes represent AI tools and IT facility sites. The latter category is exclusive to the European cluster, as IT facilities are present only in this part of the network.
The second area, located in the middle of the map, is the smallest and least densely populated of the three. This cluster serves as a transitional zone in the network (see Supplementary Table 1 for network metrics) and is primarily composed of US-based research institutes and international think tanks.
Finally, the last area is located at the bottom of the map. This cluster is characterized by the presence of established fact-checking agencies and AI tools.
Digging deeper into the first two areas, at the top and in the middle of the network, we can compare the approaches taken in the European Union and the United States. Firstly, the EU area primarily involves large national public research centers, while the US area primarily involves academia and actors financed by big tech. What characterizes the EU is the close collaboration between Horizon 2020 projects and (mostly public) IT facilities. In contrast, the peculiarity of the US area is the presence of actors financed by big tech and international institutions, such as the Microsoft-backed Data & Society research institute, the Atlantic Council's Digital Forensic Research Lab, and the AI for Good summit run by the International Telecommunication Union.
Furthermore, in the EU’s area, there is a majority of large national public research centers engaged in the study and development of AI to counter disinformation. In contrast, in the part of the map dominated by US institutions, it is primarily the academic field that is directly involved in these efforts.
Despite these differences, a close examination of the project descriptions within these two areas reveals that the strategies for developing AI tools are almost identical. On the websites of both the EU and US areas, AI tools are described as aids to humans rather than as direct solutions to the problem of disinformation, focusing primarily on improving the quality of information rather than simply targeting the spread of false information.
For example, various Horizon Europe projects focus on building AI to navigate the vast sea of digital content and detect signals of potentially dangerous or false content. VERA.ai, building on the previous efforts of WeVerify, uses AI and expert crowdsourcing to detect and verify false information, including deep fakes. Projects such as AI4TRUST, REVEAL, and InVID focus on developing tools to help journalists and citizens verify the authenticity of content. Finally, enhancing peer-to-peer moderation, SocialTruth seeks to develop blockchain solutions to counteract the online spread of disinformation, while Provenance is developing an intermediary-free solution for digital content verification that gives social media users greater control through AI.
Similarly, in the US-dominated area, initiatives such as the collaboration between NYU and Overtone.ai aim to alert readers not only to false information but also to the decontextualization of true stories, in order to warn online users and mitigate possible negative spillovers, such as the increasing sensationalization of online debate.
Overall, we can thus argue that initiatives in both the EU and US areas focus primarily on improving the quality of information through interventions in the media ecosystem, rather than directly targeting disinformation.
In contrast, the area located at the bottom of the map, which mainly consists of fact-checking agencies and AI tools, has a peculiar composition. This cluster is influenced both directly and indirectly by the US area concerned with legislation and policy. This applies not only to central US-based companies such as Snopes and PolitiFact, but also to several initiatives such as Chequeado and Full Fact, which have received funding from Google programs,16 and to FactCheck.org, which collaborated with Meta in Facebook's third-party fact-checking program. The development of AI initiatives against disinformation in this cluster is therefore closely tied to big tech and digital platforms. Additionally, its development strategy differs significantly from that of the other two areas: fact-checking agencies have a clearly different objective, using AI tools to verify news ex post, identify false content, and ultimately debunk incorrect information. This use of AI is mostly remedial rather than preventative.
Building on these findings, the division between the dominant approach in the fact-checking cluster and that of the EU and US areas leads us to identify two strategies. The first, pursued mainly through Horizon projects in the EU and through academic research in the US, addresses the issue of disinformation 'upstream', developing AI tools capable of improving the overall quality of information and debate in the media ecosystem. The second, followed by fact-checkers, is perhaps more established and addresses the circulation of disinformation 'downstream', seeking to develop and improve the detection potential of AI.
Our mapping illustrates how the use of AI to detect and fight disinformation is distributed in a dense network of different initiatives.
In the EU, the innovation and development of AI tools is promoted by public funding, especially the H2020 program. AI tools are thus developed by European projects that are often carried out in partnership with higher-education institutions, but are not led by them (an important exception being the University of Sheffield in the UK). In contrast, in the United States, AI tools are developed mostly in higher-education research environments, particularly in elite universities like Harvard and MIT, and supported by private funding. Both groups, however, share the same strategy for countering disinformation with AI, which aims at improving the overall quality of the information environment 'upstream'.
This is the crucial difference that distinguishes research projects from fact-checking initiatives. The latter, which reside primarily but not exclusively in the US, are developing and using AI tools to fight disinformation 'downstream', detecting disinformation narratives after they have spread. Nevertheless, what these two approaches to applying AI against disinformation have in common is the indispensability of human supervision.
Overall, the use of AI in these disinformation detection and mitigation projects presents both opportunities and challenges. On the one hand, AI can enhance the speed and accuracy of detecting and flagging potentially harmful content, allowing for faster responses to disinformation campaigns. AI algorithms can also help identify patterns in the spread of disinformation, aiding in the development of more targeted responses. Additionally, AI can help automate certain aspects of the fact-checking process, potentially reducing the workload on human fact-checkers.
On the other hand, one major challenge is the potential for biases to be encoded into AI algorithms, which could exacerbate existing inequalities, reinforce harmful stereotypes, and blur the distinction between disinformation and legitimate speech. A related issue is that of adversarial attacks, in which malicious actors attempt to manipulate AI systems by feeding them misleading or incorrect data.
To conclude, there are also notable gaps in our mapping effort, which may indicate areas where more investigation or research is needed. One gap is the lack of representation of non-Western countries in the network. Most of the nodes in the map are located in Europe and the United States, with only a few from other regions, such as the Chequeado initiative in South America. This may be due to several factors, including limited funding and resources for AI initiatives in these regions, the structural visibility bias of the Western search engines we queried, and different cultural and political contexts that may affect the development and implementation of AI tools to counter disinformation.
Another gap is the limited representation of civil society organizations in the network. While fact-checking agencies are represented, other types of civil society organizations such as media watchdogs and human rights groups are not as prominent. This may be because these organizations have not yet fully explored the potential of AI in their work, or because they face challenges such as limited funding and technical expertise.
Finally, it is interesting to note that most of the inactive websites excluded from our final list were projects launched in the United States after the election of Donald Trump, that is, during the height of the moral panic over perceived threats such as fake news and the post-truth era. These projects had mainly tried to solve the problem of detecting and moderating false content in a fully automated way, but they clashed with the ethical and practical limitations of such an application of AI.
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
FP: Writing – original draft, Writing – review & editing. TV: Writing – original draft, Writing – review & editing.
The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work is part of the project “Understanding Misinformation and Science in Societal Debates” (UnMiSSeD) supported by the European Media and Information Fund.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author(s) declare that no Generative AI was used in the creation of this manuscript.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpos.2025.1517726/full#supplementary-material
1. ^https://ec.europa.eu/research-and-innovation/en/horizon-magazine/can-artificial-intelligence-help-end-fake-news
2. ^https://www.itu.int/hub/2022/05/ai-can-help-fight-disinformation/
3. ^https://about.fb.com/news/2021/12/metas-new-ai-system-tackles-harmful-content/
4. ^https://www.youtube.com/howyoutubeworks/policies/community-guidelines/#detecting-violations
5. ^https://fullfact.org/about/ai/
6. ^https://www.newsguardtech.com/press/launch-of-newsguard-for-ai-training-machines-with-trust-data/
7. ^One such failure occurred during the 2020 US presidential election when Meta employed AI tools to detect and remove false or misleading content, but these tools were not always effective, as in the case of a false claim about election fraud that spread rapidly on Facebook: https://www.nytimes.com/2020/11/23/technology/election-disinformation-facebook-twitter.html
8. ^https://www.rand.org/research/projects/truth-decay/fighting-disinformation/search.html
9. ^https://counteringdisinformation.org/
10. ^https://www.weforum.org/agenda/2022/07/disinformation-ai-technology/
11. ^https://commission.europa.eu/strategy-and-policy/funded-projects-fight-against-disinformation
12. ^As a selection criterion, we also looked at the embeddedness of the different websites. For example, if the website of a project against disinformation links to the sites of the universities participating in that project, only the project website was retained. However, if some of the participating universities were themselves independently engaged in the development or use of AI against disinformation, both websites were retained.
13. ^To be engaged and thus placed on the list, an initiative must carry out one of the following three activities: (a) develop AI technology and tools; (b) lend infrastructure (e.g., computing services), or data (e.g., lists of dangerous sites), or control work (e.g., training of human-supervised algorithms); (c) systematically use the available AI tools and instruments in a counter-disinformation initiative.
14. ^A force-directed layout works according to a physical analogy: nodes receive a repulsive force that pushes them apart, while edges act as springs that bind the nodes they connect. Once launched, the algorithm adjusts the layout of the nodes until an equilibrium is reached. This balance minimizes the number of line crossings and thus maximizes the readability of the graph. Force-directed layouts do more than minimize line crossings: they also make the arrangement of nodes in space meaningful. In a network spatialized by forces, spatial distance acquires meaning: two nodes are closer the more directly or indirectly connected they are (Jacomy et al., 2014). As a consequence, network maps spatialized with force-directed algorithms sharply visualize clusters and connections.
15. ^https://medialab.sciencespo.fr/en/tools/graph-recipes/
16. ^https://www.poynter.org/fact-checking/2019/these-fact-checkers-won-2-million-to-implement-ai-in-their-newsrooms/
Alemanno, A. (2018). How to counter fake news? A taxonomy of anti-fake news approaches. Eur. J. Risk Regul. 9, 1–5. doi: 10.1017/err.2018.12
Bastian, M., Heymann, S., and Jacomy, M. (2009). Gephi: an open source software for exploring and manipulating networks. In Proceedings of the international AAAI conference on web and social media. ICWSM, San Jose, California
Blondel, V. D., Guillaume, J. L., Lambiotte, R., and Lefebvre, E. (2008). Fast unfolding of communities in large networks. J. Stat. Mech. 2008:P10008. doi: 10.1088/1742-5468/2008/10/P10008
Bontcheva, K., Symeon, P., Filareti, T., Riccardo, G., et al. (2024). Generative AI and disinformation: recent advances, challenges, and opportunities. European Digital Media Observatory. Available at: https://edmo.eu/edmo-news/new-white-paper-on-generative-ai-and-disinformation-recent-advances-challenges-and-opportunities/ (Accessed October 26, 2024).
Bontridder, N., and Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data Policy 3:e32. doi: 10.1017/dap.2021.20
Choraś, M., Demestichas, K., Giełczyk, A., Herrero, Á., Ksieniewicz, P., Remoundou, K., et al. (2021). Advanced machine learning techniques for fake news (online disinformation) detection: a systematic mapping study. Appl. Soft Comput. 101:107050. doi: 10.1016/j.asoc.2020.107050
Golovchenko, Y., Hartmann, M., and Adler-Nissen, R. (2018). State, media and civil society in the information warfare over Ukraine: citizen curators of digital disinformation. Int. Aff. 94, 975–994. doi: 10.1093/ia/iiy148
Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Inter. Policy Rev. 8, 1–22. doi: 10.14763/2019.2.1407
Graves, L. (2016). Deciding what’s true: the rise of political fact-checking in American journalism. New York, NY: Columbia University Press.
Graves, L. (2018). Understanding the promise and limits of automated fact-checking. Oxford: Reuters Institute for the Study of Journalism, University of Oxford.
Jacomy, M., Girard, P., Ooghe-Tabanou, B., and Venturini, T. (2016). Hyphe, a curation-oriented approach to web crawling for the social sciences. In Proceedings of the international AAAI conference on web and social media. PKP Publishing Services Network: Cologne, Germany
Jacomy, M., Venturini, T., Heymann, S., and Bastian, M. (2014). Force Atlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software. PLoS One 9:e98679. doi: 10.1371/journal.pone.0098679
Kertysova, K. (2018). Artificial intelligence and disinformation: how AI changes the way disinformation is produced, disseminated, and can be countered. Sec. Hum. Rights 29, 55–81. doi: 10.1163/18750230-02901005
Marsden, C., and Meyer, T. (2019). Regulating disinformation with artificial intelligence: effects of disinformation initiatives on freedom of expression and media pluralism. Brussels: European Parliament.
Marsden, C., Meyer, T., and Brown, I. (2020). Platform values and democratic elections: how can the law regulate digital disinformation? Comput. Law Secur. Rev. 36:105373. doi: 10.1016/j.clsr.2019.105373
Ooghe-Tabanou, B., Jacomy, M., Girard, P., and Plique, G. (2018). Hyperlink is not dead! In Digital Tools & Uses Congress, Paris. New York, NY: ACM Press.
Porter, E., and Wood, T. J. (2021). The global effectiveness of fact-checking: evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proc. Natl. Acad. Sci. 118:e2104235118. doi: 10.1073/pnas.2104235118
Severo, M., and Venturini, T. (2016). Intangible cultural heritage webs: comparing national networks with digital methods. New Media Soc. 18, 1616–1635. doi: 10.1177/1461444814567981
Venturini, T., Jacomy, M., and Jensen, P. (2021). What do we see when we look at networks: visual network analysis, relational ambiguity, and force-directed layouts. Big Data Soc. 8:20539517211018488. doi: 10.1177/20539517211018488
Vosoughi, S., Roy, D., and Aral, S. (2018). The spread of true and false news online. Science 359, 1146–1151. doi: 10.1126/science.aap9559
Yang, K. C., and Menczer, F. (2023). Large language models can rate news outlet credibility. Available at: https://arxiv.org/abs/2304.00228 (Accessed October 26, 2024).
Keywords: artificial intelligence, disinformation, counter-disinformation, web mapping, hyperlink network analysis
Citation: Pilati F and Venturini T (2025) The use of artificial intelligence in counter-disinformation: a world wide (web) mapping. Front. Polit. Sci. 7:1517726. doi: 10.3389/fpos.2025.1517726
Received: 26 October 2024; Accepted: 14 January 2025;
Published: 07 February 2025.
Edited by: Paul Vines, Two Six Technologies, United States
Reviewed by: Ralph Schroeder, University of Oxford, United Kingdom
Copyright © 2025 Pilati and Venturini. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Federico Pilati, federico.pilati2@unibo.it; Tommaso Venturini, tommaso.venturini@unige.ch