Biometrics, as a means of user authentication, has been proposed to improve the security of conventional access control systems based on secret knowledge (e.g., a password) or personal possession (e.g., a physical key). On the one hand, the use of biometric modalities as access credentials prevents common fraud such as key theft or the exploitation of password weaknesses; on the other hand, it also admits some inaccuracy when comparing probe and reference samples. This opens the door to biometrics-specific fraud such as spoofing, in which a fake biometric sample is presented to the access control system instead of a genuine one.
In fact, biometric user authentication can be attacked at many points, so its proper use requires explicit protection mechanisms, including at least presentation attack detection (PAD). Although presentation attacks still pose the highest risk of impostor verification trials being accepted, a new generation of attacks on biometric systems targets either the biometric template or the communication between an acquisition device and a signal processing unit. The most common example of the former is the morphing attack, in which an enrollment unit is tricked into accepting a weakened template; the weakened template can then be successfully matched against probe samples of more than one subject. The latter is known as the camera injection attack, in which the signal from an acquisition device is replaced by a synthetic signal that originates from a malicious actor but has been modified on the fly to resemble a genuine user.
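As an illustrative sketch of why a morphed reference template is dangerous, the following Python snippet blends two stand-in images and scores them with a toy matcher based on mean pixel difference. The functions `morph` and `toy_match` and the threshold value are assumptions made purely for illustration, not a real face recognition pipeline:

```python
import numpy as np

def morph(img_a, img_b, alpha=0.5):
    """Pixel-level alpha blend -- the simplest form of a face morph."""
    return (alpha * img_a + (1 - alpha) * img_b).astype(img_a.dtype)

def toy_match(probe, reference, threshold=60.0):
    """Hypothetical matcher: accept if the mean absolute pixel
    difference between probe and reference falls below a threshold."""
    diff = np.mean(np.abs(probe.astype(float) - reference.astype(float)))
    return float(diff) < threshold

# Two dissimilar random images stand in for face samples of two subjects.
rng = np.random.default_rng(0)
face_a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
face_b = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# The morph is enrolled as the (weakened) reference template.
morphed = morph(face_a, face_b)
```

Because the morph lies "between" both source images, this naive matcher accepts it against probes of either subject, even though the two subjects would not match each other directly.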
Protection against this new generation of attacks is currently one of the hottest topics in biometrics, especially with the rise of generative models that allow the synthesis of realistic biometric samples.
With the advent of deep convolutional neural networks and the availability of large datasets for training recognition models, modern biometric user authentication systems have seen a dramatic gain in recognition performance, which is especially evident for face recognition. However, recognition performance is usually measured in a friendly environment, often ignoring adversarial samples and intentional misuse. How does recognition performance change in the presence of frequent obfuscation or impersonation attempts? How can we ensure that a biometric authentication system is robust to a certain type of attack? How can we ensure that the system has no bias towards a certain race, age, or gender?
This article collection aims to shed light on advances in the mitigation of both common attacks on biometric authentication systems and the new generation of attacks. It should be useful for developers of commercial biometric systems, highlighting potential weaknesses in their systems and providing them with a path to more advanced and secure biometric user authentication solutions.
We welcome both review and research papers addressing topics including, but not limited to:
• Advanced techniques to generate realistic biometric samples, including variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models (e.g., Stable Diffusion)
• Advances in passive and active presentation attack detection
• Face, iris, fingerprint, and voice morphing attacks
• Camera injection attack detection
• Adversarial attacks on biometric matchers
• Model poisoning attacks on neural-network-driven biometric systems
• Deepfake detection
• Evaluation of biometric system robustness to different kinds of attacks
• Role of synthetic biometric samples in training of more robust recognition models and countering attacks on biometric matchers
• Biases in biometric matchers
• Explainability and interpretability of decisions made by DCNN-based attack detectors
• Industrial perspective on maturity of biometric solutions
• Vulnerabilities in KYC (know your customer) solutions
Keywords:
biometrics, authentication, security, signal processing, cyberattacks
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.