Generative AI for Cybersecurity: Attack and Defense Strategies

  • 247

    Total downloads

  • 3,532

    Total views and downloads

About this Research Topic

This Research Topic is still accepting articles.

Background

The revolutionary advancements in Artificial Intelligence over the past few years are exemplified by the widespread adoption of Generative AI, across various application domains. Large Language Models (LLMs), and Generative AI in general, have gained popularity due to their capacity to understand, generate, and manipulate human language. Despite their application in areas such as content generation, automated assistance, and data analysis offering considerable economic and operational benefits, the advancing capabilities of Generative AI have also led to serious concerns regarding its potential for misuse. Generative AI can be weaponized in several ways, including the automated generation of highly convincing phishing emails, use as tools for automated penetration testing, or even the creation of malicious code designed to exploit software vulnerabilities. Its ability to replicate human-like language patterns makes it particularly dangerous in the context of cyber-physical systems (CPSs), financial networks, healthcare systems, and other critical infrastructures.

To mitigate the potential misuse of Generative AI, developers have implemented safeguards to prevent these models from responding to unethical inputs. However, despite these efforts, attackers have found ways to exploit Generative AI by employing sophisticated prompt-jailbreaking techniques that circumvent these safeguards, forcing the models to comply with malicious requests. As a result, cybercriminals are increasingly leveraging Generative AI to craft more sophisticated and potentially more dangerous cyberattacks. This evolving threat landscape underscores the need for a deeper understanding of how attackers are manipulating Generative AI and how security professionals can develop effective countermeasures to mitigate these risks.

This Research Topic seeks to explore the dual-use potential of Generative AI, exemplified by the capillary diffusion of LLMs, in both offensive and defensive cybersecurity scenarios, as well as the broader implications for cybersecurity. The objective is to provide a platform for interdisciplinary research, bridging the fields of artificial intelligence, cybersecurity, and social engineering, to foster a comprehensive understanding of the emerging threats posed by Generative AI-based cyberattacks, and the techniques attackers use to bypass the ethical constraints imposed by developers.

The scope of this Research Topic includes, but is not limited to:

· Exploitation of Generative AI for social engineering attacks
· Adversarial attacks against Generative AI-based systems
· The weaponization of Generative AI for generating malicious code or exploiting vulnerabilities
· Prompt jailbreaking techniques to bypass ethical barriers of LLMs
· Ethical and legal implications of LLM misuse in cybersecurity
· Detection and mitigation strategies for Generative AI-driven cyberattacks
· Automated identification of deepfake content generated by Generative AI
· Developing robust, explainable, and interpretable AI models for detecting Generative AI-related threats
· Investigating the impact of Generative AI on the security of critical infrastructure, including CPSs and healthcare systems

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Clinical Trial
  • Community Case Study
  • Conceptual Analysis
  • Curriculum, Instruction, and Pedagogy
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.

Keywords: Cybersecurity, Generative AI Cyberattacks, Weaponization of Large Language Models, Cyberattack Mitigation, Deepfake Detection

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Topic editors

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.

Impact

  • 3,532Topic views
  • 924Article views
  • 247Article downloads
View impact