AUTHOR=Li Yi-Fan, Ying Haojiang TITLE=Disrupted visual input unveils the computational details of artificial neural networks for face perception JOURNAL=Frontiers in Computational Neuroscience VOLUME=16 YEAR=2022 URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2022.1054421 DOI=10.3389/fncom.2022.1054421 ISSN=1662-5188 ABSTRACT=
Background

The Deep Convolutional Neural Network (DCNN), with its remarkable performance, has attracted the attention of researchers from many disciplines. Studies of DCNNs and of biological neural systems have inspired each other reciprocally: brain-inspired neural networks not only achieve strong performance but also serve as computational models of biological neural systems.

Methods

In this study, we trained and tested several typical DCNNs (AlexNet, VGG11, VGG13, VGG16, DenseNet, MobileNet, and EfficientNet) on a face ethnicity categorization task (Experiment 1) and an emotion categorization task (Experiment 2). We measured the performance of the DCNNs on original and lossy visual inputs (various kinds of image occlusion) and compared it with that of human participants. Moreover, the class activation map (CAM) method allowed us to visualize where these DCNNs focus their “attention” within the visual inputs.
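
The Methods name two concrete operations: occluding parts of a face image to create lossy input, and using class activation maps to visualize which regions drive a DCNN's decision. The paper's exact implementation is not given in this abstract, so the PyTorch snippet below is a minimal illustrative sketch: it uses the widely adopted Grad-CAM variant of CAM, and the target layer, occlusion patch geometry, and random stand-in face tensor are assumptions rather than the authors' actual pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def occlude(img, top, left, size, fill=0.0):
    """Return a copy of `img` (1, 3, H, W) with a square patch masked out.

    One simple form of the "lossy" input described above; the patch
    location and size here are illustrative choices.
    """
    out = img.clone()
    out[..., top:top + size, left:left + size] = fill
    return out

model = models.vgg13(weights=None)  # task-specific face weights would be loaded here
model.eval()

# Capture activations and gradients at the last conv layer of VGG13's
# feature stack via hooks (an assumed, but typical, Grad-CAM target).
store = {}
target_layer = model.features[-3]
target_layer.register_forward_hook(
    lambda m, i, o: store.__setitem__("act", o.detach()))
target_layer.register_full_backward_hook(
    lambda m, gi, go: store.__setitem__("grad", go[0].detach()))

def grad_cam(img, class_idx):
    """Heatmap of the input regions that drive the logit for `class_idx`."""
    logits = model(img)                  # img: (1, 3, 224, 224)
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature map by its spatially averaged gradient, combine,
    # and keep only positive evidence for the class.
    w = store["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()  # normalize to [0, 1]

# Usage: compare attention maps for an intact vs. an occluded face.
face = torch.rand(1, 3, 224, 224)        # stand-in for a preprocessed face image
heat_full = grad_cam(face, class_idx=0)
heat_occl = grad_cam(occlude(face, 80, 80, 64), class_idx=0)
```

In the spirit of the paradigm described here, one would compare the heatmap of the intact face with that of the occluded one to see whether the network's focus shifts the way human attention does under the same impairment.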

Results

The results suggested that VGG13 performed the best: its performance closely resembled that of human participants in psychophysical measurements, it utilized similar areas of the visual inputs as humans did, and its performance was the most consistent across the various kinds of impaired inputs.

Discussion

Overall, we examined the processing mechanisms of DCNNs using a new paradigm and found that VGG13 may be the most human-like DCNN for this task. This study also highlights the value of using human perception as a benchmark for studying and developing DCNNs.