Efficient Deep Neural Network For Intelligent Robot System: Focusing On Visual Signal Processing

Flowchart of facial expression recognition based on NGO-BILSTM.
The pose estimation of the OP network (left), HG network (middle), and HR network (right) after model transfer.
Visualization of the weights and feature maps of the first convolutional layer of VGG16 on the CIFAR100 dataset. The first convolutional layer has 64 filters, and the filters with red bounding boxes are to be pruned. The green boxes mark similar channels determined manually from the feature maps and convolution kernels, while the green and red boxes together mark the channels identified and pruned by the algorithm.
Qualitative comparisons on real-world images from Fattal (2014). (A) Hazy. (B) DCP. (C) DCPDN. (D) MSBDN. (E) FFA. (F) DA. (G) PSD. (H) RefineD. (I) Ours.
Original Research
01 December 2022

Motivation: Image dehazing, as a key prerequisite for high-level computer vision tasks, has gained extensive attention in recent years. Traditional model-based methods recover dehazed images via the atmospheric scattering model; they dehaze favorably but often introduce artifacts due to errors in parameter estimation. By contrast, recent model-free methods directly restore dehazed images with an end-to-end network, which achieves better color fidelity. To improve the dehazing effect, we combine the complementary merits of these two categories and propose a physical-model-guided self-distillation network for single image dehazing, named PMGSDN.
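The atmospheric scattering model the model-based methods rely on is commonly written as I(x) = J(x)t(x) + A(1 - t(x)), where I is the hazy image, J the scene radiance, t the transmission map, and A the atmospheric light. A minimal NumPy sketch of inverting the model given estimates of t and A (the function and parameter names are illustrative, not from the paper; the error-prone step in practice is estimating t and A, which is what causes the artifacts mentioned above):

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I:     hazy image, float array in [0, 1], shape (H, W, 3)
    t:     estimated transmission map, shape (H, W)
    A:     estimated atmospheric light, shape (3,)
    t_min: lower bound on t to avoid amplifying noise where haze is dense
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast t over color channels
    J = (I - A * (1.0 - t)) / t             # solve the model for scene radiance J
    return np.clip(J, 0.0, 1.0)
```

With perfect estimates of t and A, this inversion recovers the clear image exactly; in practice both are estimated (e.g., via the dark channel prior), and estimation error propagates into J.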

Proposed method: First, we propose a novel attention-guided feature extraction block (AGFEB) and use it to build a deep feature extraction network. Second, we add three early-exit branches and embed dark channel prior information into the network to merge the merits of model-based and model-free methods. We then adopt self-distillation to transfer features from the deeper layers (acting as teacher) to the shallow early-exit branches (acting as students), improving the dehazing effect.
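A self-distillation objective of the kind the abstract describes is commonly expressed as a weighted sum, per early-exit branch, of a reconstruction term against the ground truth and a feature-mimicking term against the deepest layer. A minimal NumPy sketch under that assumption (the loss weights and names are illustrative, not the paper's actual formulation):

```python
import numpy as np

def self_distillation_loss(branch_outs, branch_feats, teacher_feat, target, alpha=0.5):
    """Sum the loss over early-exit branches.

    branch_outs:  list of dehazed predictions, one per early-exit branch
    branch_feats: list of branch feature maps, each shaped like teacher_feat
    teacher_feat: feature map from the deepest layer (acts as teacher)
    target:       ground-truth haze-free image
    alpha:        trade-off between reconstruction and feature mimicking
    """
    total = 0.0
    for out, feat in zip(branch_outs, branch_feats):
        recon = np.mean((out - target) ** 2)        # branch reconstructs the clear image
        hint = np.mean((feat - teacher_feat) ** 2)  # branch mimics teacher features
        total += alpha * recon + (1.0 - alpha) * hint
    return total
```

The feature-mimicking term is what transfers knowledge from deep (teacher) to shallow (student) layers without a separate teacher network, which is the defining trait of self-distillation.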

Results: On the I-HAZE and O-HAZE datasets, the proposed method outperforms the compared methods, achieving the best PSNR and SSIM values: 17.41 dB and 0.813 on I-HAZE, and 18.48 dB and 0.802 on O-HAZE. Moreover, for real-world images, the proposed method also obtains high-quality dehazed results.

Conclusion: Experimental results on both synthetic and real-world images demonstrate that the proposed PMGSDN can effectively dehaze images, resulting in dehazed results with clear textures and good color fidelity.
