ORIGINAL RESEARCH article

Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
Volume 7 - 2024 | doi: 10.3389/frai.2024.1457247
This article is part of the Research Topic Human-Centered Artificial Intelligence in Interaction Processes View all 5 articles

Evaluating the Role of Generative AI and Color Patterns in the Dissemination of War Imagery and Disinformation on Social Media

Provisionally accepted
Estibaliz García-Huete 1*, Sara Ignacio-Cerrato 2, David Pacios Izquierdo 3, Jose Luis Vazquez-Poletti 3, María José Pérez Serrano 1, Andrea Donofrio 1, Clemente Cesarano 4, Nikolaos Schetakis 5,6, Alessio Di Iorio 7
  • 1 Faculty of Information Sciences, Complutense University of Madrid, Madrid, Spain
  • 2 Department of Optometry and Vision, Faculty of Optics and Optometry, Complutense University of Madrid, Madrid, Spain
  • 3 Department of Computer Architecture and Automation, Faculty of Computer Science and Engineering, Complutense University of Madrid, Madrid, Spain
  • 4 Section of Mathematics, International Telematic University Uninettuno, Rome, Italy
  • 5 Computational Mechanics and Optimization Laboratory, School of Production Engineering and Management, Technical University of Crete, Chania, Greece
  • 6 Other, Chania, Greece
  • 7 Other, Rome, Italy

The final, formatted version of the article will be published soon.

    This study explores the evolving role of social media in the spread of misinformation during the Ukraine-Russia conflict, focusing on how artificial intelligence (AI) contributes to the creation of deceptive war imagery. Specifically, the research examines the relationship between color patterns (look-up tables, LUTs) in war-related visuals and their perceived authenticity, highlighting the economic, political, and social ramifications of such manipulative practices. AI technologies have significantly advanced the production of highly convincing yet artificial war imagery, blurring the line between fact and fiction. An experimental project is proposed to train a generative AI model capable of producing war imagery that mimics real-life footage. By analyzing the outcomes of this experiment, the study aims to establish a link between specific color patterns and the likelihood that images are perceived as authentic, shedding light on the mechanics of visual misinformation and manipulation. Additionally, the research investigates the potential of a serverless AI framework to advance both the generation and the detection of fake news, marking a pivotal step in the fight against digital misinformation. Ultimately, the study seeks to contribute to ongoing debates on the ethical implications of AI in information manipulation and to propose strategies for combating these challenges in the digital era.
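    To make the LUT concept concrete: a color LUT maps each input channel value to a new output value, which is how a "color grade" is applied to footage. The sketch below is a minimal, hypothetical illustration in NumPy (not the authors' implementation); the specific gain values and the "warm, desaturated" palette are assumptions chosen only to show the mechanism.

```python
import numpy as np

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each 8-bit channel value through a 256-entry per-channel LUT.

    image: uint8 array of shape (H, W, 3)
    lut:   uint8 array of shape (256, 3), one column per RGB channel
    """
    out = np.empty_like(image)
    for c in range(3):
        # Use the channel's pixel values as indices into its LUT column.
        out[..., c] = lut[image[..., c], c]
    return out

# Hypothetical "warm" grade: lift reds, mute blues (illustrative gains only).
x = np.arange(256, dtype=np.float64)
lut = np.stack([
    np.clip(x * 1.10, 0, 255),   # boost red channel
    np.clip(x * 1.00, 0, 255),   # leave green unchanged
    np.clip(x * 0.85, 0, 255),   # mute blue channel
], axis=1).astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)  # flat mid-gray test image
graded = apply_lut(img, lut)
print(graded[0, 0])  # [110 100  85]
```

    In a study like this one, such a transform would be one of many candidate color patterns whose effect on perceived authenticity could then be measured experimentally.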

    Keywords: social media, disinformation, generative AI, color patterns, LUTs, fake news, war imagery, information manipulation

    Received: 30 Jun 2024; Accepted: 21 Nov 2024.

    Copyright: © 2024 García-Huete, Ignacio-Cerrato, Pacios Izquierdo, Vazquez-Poletti, Pérez Serrano, Donofrio, Cesarano, Schetakis and Di Iorio. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Estibaliz García-Huete, Faculty of Information Sciences, Complutense University of Madrid, Madrid, Spain

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.