The field of multimedia forensics has become increasingly vital in the digital age, where the authenticity and trustworthiness of multimedia content are paramount. The rapid advancements in AI content production, particularly through Generative Adversarial Networks (GANs) and Diffusion Models, have revolutionized the creation of synthetic images and videos. While these technologies offer remarkable opportunities for entertainment and artistic expression, they also pose significant challenges by enabling the manipulation of information and the spread of disinformation. The general public, inundated with such content via social media and news platforms, finds it increasingly difficult to distinguish between fact and fiction. Recent studies have shown that generative algorithms leave imperceptible traces that can be exploited to detect AI-generated content. However, there remains a critical need for more robust and comprehensive methods to address these challenges and reinforce trust in digital information.
This Research Topic aims to explore and develop advanced techniques for distinguishing between genuine and synthetic multimedia content. Specifically, it seeks to identify the traces left by generative models within media and develop methods to verify their authenticity. Additionally, the research will examine tools and strategies that enable news agencies and content providers to verify the veracity and provenance of their information amidst sophisticated AI manipulation techniques. By encouraging the creation of comprehensive datasets of both AI-generated and genuine multimedia content, this Research Topic aims to provide a solid foundation for training and evaluating new tools and technologies in media forensics and digital content verification.
To gather further insights into the frontiers of multimedia forensics, we welcome articles addressing, but not limited to, the following themes:
- AI techniques for multimedia forensics
- Identification of synthetic visual content generated by AI algorithms
- Algorithms for detecting manipulated media spread across social networks, web, and instant messaging platforms
- Methods to trace the provenance of image and video data
- Tools and applications designed for end-users to detect AI-generated media
This Research Topic encompasses algorithms and technologies capable of detecting synthetic images and videos generated by AI models, along with data that can be used to train and evaluate such algorithms. We accept papers in the form of Original Research, Methods, Systematic Review, Hypothesis & Theory, Technology and Code, Mini Review, and Data Report.
Keywords:
Multimedia Forensics, Artificial Intelligence, Text-to-Image Detection, Synthetic Images and Videos, DeepFakes
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.