ORIGINAL RESEARCH article
Front. Imaging
Sec. Imaging Applications
Volume 4 - 2025 | doi: 10.3389/fimag.2025.1504551
This article is part of the Research Topic "New Generation of Attacks on Biometric User Authentication Systems".
The final, formatted version of the article will be published soon.
Deepfakes have become ubiquitous in modern society, and the number of manipulated videos and news stories on the internet is increasing. The current evolution of image generation techniques makes it increasingly difficult to detect manipulated content through visual inspection alone. This has motivated researchers to analyze various signal properties for automatic deepfake detection. Among others, the complexity of heart rate dynamics in a video is used to distinguish real videos from deepfakes. In particular, these signals are extracted from facial video streams through remote photoplethysmography (rPPG). The prevailing assumption in the state of the art is that these subtle physiological signals are lost in the deepfake generation process and can therefore be used for deepfake detection. This paper presents a study demonstrating that this assumption is no longer valid for the current generation of deepfakes. To this end, we present a pipeline that extracts heart rate signals from the captured rPPG signals and show that current deepfake generation processes preserve the rPPG signal of the driving video sequence. For this, we captured a dataset of facial videos in synchrony with an electrocardiogram (ECG) as the ground-truth pulse signal. For deepfakes generated on this dataset, we show that (i) the deepfake videos exhibit valid heart rate dynamics and (ii) these match those of the underlying source videos. Furthermore, we show that this also holds for deepfakes from a publicly available dataset.
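The abstract describes extracting heart rate from facial video via rPPG. As a rough illustration of the idea (not the authors' actual pipeline, whose details are in the full paper), the following sketch estimates a pulse rate from a cropped facial region by spatially averaging the green channel per frame and locating the dominant spectral peak in the physiologically plausible pulse band; the function name and parameters are illustrative assumptions:

```python
import numpy as np

def estimate_heart_rate(frames, fps, low=0.7, high=4.0):
    """Illustrative green-channel rPPG sketch: estimate heart rate in BPM.

    frames: array of shape (T, H, W, 3), assumed already cropped to a skin
    region of interest (e.g. forehead or cheeks).
    fps:    video frame rate in Hz.
    low/high: pulse band in Hz (0.7-4.0 Hz covers roughly 42-240 BPM).
    """
    # Spatially average the green channel per frame -> raw rPPG trace
    g = frames[:, :, :, 1].mean(axis=(1, 2))
    g = g - g.mean()  # remove the DC component

    # Magnitude spectrum of the trace, restricted to the pulse band
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(g))
    band = (freqs >= low) & (freqs <= high)

    # Dominant in-band frequency, converted from Hz to beats per minute
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```

Real pipelines add chrominance-based or model-based signal extraction, detrending, and temporal windowing on top of this; the sketch only conveys why a deepfake that preserves the driving video's rPPG trace would also reproduce its heart rate estimate.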
Keywords: Deepfakes, Video forensics, Remote Photoplethysmography (rPPG), Biological signals, Remote heart rate estimation, imaging photoplethysmography (IPPG)
Received: 30 Sep 2024; Accepted: 25 Feb 2025.
Copyright: © 2025 Seibold, Wisotzky, Beckmann, Kossack, Hilsmann and Eisert. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Peter Eisert, Fraunhofer HHI, Berlin, Germany
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.