About this Research Topic
Artificial intelligence is changing the way we create and evaluate information, and this is happening during an infodemic, as governments and their citizens confront disinformation affecting politics, public health (e.g., COVID-19), financial markets, and elections. Although the impact of disinformation is intrinsically difficult to quantify and prone to overestimation in the popular consciousness, specific examples, such as lynchings catalyzed by disinformation spread over social media, highlight that the threat crosses social scales and boundaries. This threat extends into the realm of military combat: as early as 2019, a NATO StratCom experiment established that social media exploitation enabled "successfully influencing [soldiers participating in an exercise to] carry out desired behaviors" such as "leaving their positions, not fulfilling duties, etc." [Bay and Biteniece, 2019]. Information has thus become a global safety issue, and disinformation can kill. While malign actors have weaponized information for years, governments have now embedded information as a war-fighting function. AI is essential to the scale, speed, and automation of both offensive and defensive online influence operations.
Machine learning plays a central role in both the production of synthetic disinformation and its propagation. Bad actors scale disinformation operations effectively by using large language models and GPT-based chatbots, deepfakes, cloned websites, and forgeries. The situation is exacerbated by the proprietary, advertising-driven algorithms of search engines and social media platforms, which can effectively isolate internet users from alternative information and viewpoints. Indeed, social media's business model, with its behavioral tracking algorithms, is arguably optimized for launching a global pandemic of cognitive hacking. The future is clear: "researchers, governments, social platforms, and private actors will be engaged in a continual arms race to influence—and protect from influence—large groups of users online" [Waltzman, 2017].
While machine learning is essential for identifying and inhibiting the spread of disinformation at internet speed and scale, the Disinformation Countermeasures and Artificial Intelligence topic collection will also address other cognitive security (CogSec) approaches, such as cognitive resilience building and public-private collaboration models, that draw on multiple disciplines, including cognitive linguistics, psychology, political science, and information science, and contribute to countering disinformation in a broad sense.
The Disinformation Countermeasures and Artificial Intelligence collection offers a platform for discussing novel trends and research on this topic, and could have significant real-world impact. It aims to foster multi-disciplinary work and cross-sectoral knowledge exchange among the AI/ML, cybersecurity, and information science communities, researchers and practitioners from the public and private sectors, and adjacent fields such as big data, data science, psychology, and cognitive linguistics. The collection will disseminate novel trends in CogSec research and their role in the development of secure information environments. It aims to highlight the latest research trends as well as open research problems related to AI-powered disinformation in cognitive warfare, machine learning techniques to counter disinformation, novel public-private partnership (P3) models for countering synthetic disinformation, and cross-disciplinary methods of building cognitive resilience to malign influence operations with the use of AI. Most importantly, it aims to promote the application of these emerging disinformation countermeasures in the real world.
In summary, authors are encouraged to submit articles of any type (e.g., Original Research, Review, or Opinion) on the following topics:
• AI / ML-powered countermeasures for disinformation
• Detecting disinformation (text analysis, data mining, and natural language processing techniques)
• Deepfake detection
• Bot detection
• Synthetic disinformation production and amplification
• Social media analysis (sentiment analysis, social media mining)
• CogSec models (other than AI/ML-powered disinformation countermeasures)
• Cognitive resilience building approaches
• P3 models to counter synthetic disinformation
• Relevant topics in cognitive linguistics and psychology, such as cognitive resilience, cognitive biases and safeguards, information processing, and critical thinking
References:
Bay, S. and Biteniece, N. "The current digital arena and its risks to serving military personnel." In Bay, S., et al. Responding to Cognitive Security Challenges. NATO StratCom COE (2019).
Waltzman, R. "The weaponization of information." Testimony before the US Senate Subcommittee on Cybersecurity (2017).
Keywords: Cognitive Security, Disinformation Countermeasures, Machine Learning, Synthetic Disinformation, Cognitive Resilience, Influence Operations, Cyber Attacks
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.