AUTHOR=Lan Yunwei, Cui Zhigao, Su Yanzhao, Wang Nian, Li Aihua, Han Deshuai TITLE=Physical-model guided self-distillation network for single image dehazing JOURNAL=Frontiers in Neurorobotics VOLUME=16 YEAR=2022 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.1036465 DOI=10.3389/fnbot.2022.1036465 ISSN=1662-5218 ABSTRACT=Motivation

Image dehazing, as a key prerequisite of high-level computer vision tasks, has gained extensive attention in recent years. Traditional model-based methods recover dehazed images via the atmospheric scattering model; they dehaze effectively but often introduce artifacts due to errors in parameter estimation. By contrast, recent model-free methods directly restore dehazed images with an end-to-end network, which achieves better color fidelity. To improve the dehazing effect, we combine the complementary merits of these two categories and propose a physical-model guided self-distillation network for single image dehazing, named PMGSDN.
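For context, the atmospheric scattering model referenced above is the standard haze formation model (not spelled out in the abstract); in its usual form, with hazy image I, scene radiance J, global atmospheric light A, and transmission t:

```latex
I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
```

where beta is the scattering coefficient and d(x) the scene depth. Model-based methods estimate t and A, then invert this equation to recover J; errors in those estimates are the source of the artifacts mentioned above.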

Proposed method

First, we propose a novel attention guided feature extraction block (AGFEB) and build a deep feature extraction network from it. Second, we add three early-exit branches and embed dark channel prior information into the network to merge the merits of model-based and model-free methods. We then adopt self-distillation to transfer features from the deeper layers (acting as the teacher) to the shallow early-exit branches (acting as students) to improve the dehazing effect.
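The dark channel prior embedded in the network is a classic statistic (He et al.): in haze-free outdoor images, most local patches contain at least one channel with a near-zero intensity, so large dark-channel values indicate haze. A minimal NumPy sketch of computing it (the patch size of 15 is a common default, not a value stated in this abstract) could look like:

```python
import numpy as np

def dark_channel(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
    """Dark channel prior: per-pixel minimum over RGB channels,
    followed by a local minimum filter over a square patch.

    image: H x W x 3 array with values in [0, 1].
    Returns an H x W dark channel map."""
    # Step 1: minimum over the color channels at each pixel.
    min_rgb = image.min(axis=2)

    # Step 2: minimum filter over a patch_size x patch_size window,
    # using edge padding so the output keeps the input's spatial size.
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark
```

In a network such as PMGSDN, a map like this would typically be computed from the hazy input and concatenated with learned features as an extra guidance channel; the exact fusion scheme is described in the paper, not here.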

Results

On the I-HAZE and O-HAZE datasets, the proposed method outperforms the compared methods, achieving the best PSNR/SSIM values of 17.41 dB/0.813 and 18.48 dB/0.802, respectively. Moreover, for real-world images, the proposed method also produces high-quality dehazed results.

Conclusion

Experimental results on both synthetic and real-world images demonstrate that the proposed PMGSDN dehazes effectively, producing results with clear textures and good color fidelity.