AUTHOR=Chen Zhikui, Zhang Xu, Huang Wei, Gao Jing, Zhang Suhua TITLE=Cross Modal Few-Shot Contextual Transfer for Heterogenous Image Classification JOURNAL=Frontiers in Neurorobotics VOLUME=15 YEAR=2021 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2021.654519 DOI=10.3389/fnbot.2021.654519 ISSN=1662-5218 ABSTRACT=

Deep transfer learning aims to handle new tasks for which only insufficient samples are available. In few-shot learning scenarios, however, the low diversity of the few known training samples makes them prone to be dominated by sample-specific traits, yielding one-sided local features rather than reliable global features of the categories they actually belong to. To alleviate this difficulty, we propose a cross-modal few-shot contextual transfer method that leverages contextual information as a supplement and learns context-aware transfer in few-shot image classification, fully exploiting the information in heterogeneous data. The similarity measure of the image classification task is reformulated by fusing textual semantic information with the visual semantic information extracted from images; this fusion serves as a supplement and helps suppress sample specificity. In addition, to better extract local visual features and reorganize the recognition pattern, a deep transfer scheme is used to reuse a powerful feature extractor from a pre-trained model. Simulation experiments show that introducing cross-modal and intra-modal contextual information effectively suppresses the deviation in defining category features from few samples and improves the accuracy of few-shot image classification tasks.
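To make the idea concrete, the following is a minimal sketch of a metric-based few-shot classifier in which visual class prototypes from a frozen pre-trained backbone are fused with textual label embeddings before similarity is computed. The choice of a ResNet-18 backbone, a linear text projection, the weighted-sum fusion, and the parameter alpha are illustrative assumptions; the abstract does not specify the paper's exact architecture or fusion scheme.

# Minimal sketch (assumed: PyTorch/torchvision; the fusion rule and backbone
# are illustrative, not the authors' exact method).
import torch
import torch.nn.functional as F
from torchvision import models

class CrossModalFewShotClassifier(torch.nn.Module):
    def __init__(self, text_dim=300, feat_dim=512, alpha=0.5):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = torch.nn.Identity()       # reuse the pre-trained extractor
        for p in backbone.parameters():
            p.requires_grad = False             # deep transfer: freeze the backbone
        self.backbone = backbone
        self.text_proj = torch.nn.Linear(text_dim, feat_dim)  # map text embeddings to visual space
        self.alpha = alpha                      # fusion weight (hypothetical choice)

    def forward(self, support_imgs, support_labels, text_embs, query_imgs, n_way):
        # Visual prototypes: mean support feature per class.
        s_feat = self.backbone(support_imgs)                   # (n_support, feat_dim)
        protos = torch.stack(
            [s_feat[support_labels == c].mean(0) for c in range(n_way)]
        )
        # Fuse with textual semantics to counter sample specificity.
        t_feat = self.text_proj(text_embs)                     # (n_way, feat_dim)
        protos = self.alpha * protos + (1 - self.alpha) * t_feat
        # Classify queries by cosine similarity to the fused prototypes.
        q_feat = self.backbone(query_imgs)                     # (n_query, feat_dim)
        return F.cosine_similarity(q_feat.unsqueeze(1), protos.unsqueeze(0), dim=-1)

In this sketch the textual modality simply shifts each visual prototype toward a class-level semantic anchor, which is one plausible way to realize the "supplement" role the abstract describes for contextual information.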