Reinforced Depth-Aware Deep Learning for Single Image Dehazing
| Main Authors: | |
|---|---|
| Format: | Conference Proceeding |
| Language: | English |
| Summary: | Image dehazing continues to be one of the most challenging inverse problems. Deep learning methods have emerged to complement traditional model-based methods and have helped define a new state of the art in achievable dehazed image quality. However, most deep learning-based methods design a regression network as a black-box tool to estimate the dehazed image and/or the physical parameters of the haze model, i.e., the ambient light (A) and the transmission map (t); the inverse haze model may then be used to estimate the dehazed image. In this work, we propose a Depth-aware Dehazing system using Reinforcement Learning, denoted DDRL. DDRL generates the dehazed image in a near-to-far progressive manner by utilizing the depth information of the scene, in contrast to recent learning-based methods that estimate these parameters in a single pass. In particular, DDRL exploits the fact that haze is less dense near the camera and becomes increasingly dense as the scene recedes from the camera. DDRL consists of a policy network and a dehazing (regression) network. The policy network estimates the current depth level for the dehazing network to use, and a novel policy regularization term encourages the policy network to generate the policy sequence in near-to-far order. In extensive tests on three benchmark test sets, DDRL demonstrates substantially improved dehazing results, particularly when training data is limited. |
| ISSN: | 2379-190X |
| DOI: | 10.1109/ICASSP40776.2020.9054504 |
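For reference, the haze model the summary refers to is the standard atmospheric scattering model used throughout the dehazing literature; writing the transmission in terms of scene depth makes the near-to-far argument explicit:

$$
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
$$

where I is the observed hazy image, J is the haze-free scene radiance, A is the ambient light, t is the transmission map, β is the scattering coefficient, and d(x) is the scene depth. Because t decays exponentially with depth, haze is thinnest near the camera; the inverse haze model mentioned in the summary then recovers the dehazed image as $J(x) = \bigl(I(x) - A(1 - t(x))\bigr)/t(x)$.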
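As a minimal illustration of the near-to-far idea (a sketch, not the authors' DDRL implementation): the learned policy and dehazing networks are replaced here by a fixed nearest-first sweep over depth bands and a closed-form inversion of the haze model. The function name, the band partitioning by depth quantiles, and the default parameter values are assumptions made for this example.

```python
import numpy as np

def dehaze_near_to_far(hazy, depth, A=0.9, beta=1.0, num_stages=4):
    """Toy near-to-far dehazing sweep (illustrative stand-in, not DDRL).

    hazy:  H x W x 3 hazy image with values in [0, 1]
    depth: H x W scene depth map (larger values = farther from the camera)
    """
    # Transmission from depth, floored to keep the inversion stable.
    t = np.clip(np.exp(-beta * depth), 0.1, 1.0)
    # Partition pixels into depth bands; band 0 lies nearest the camera.
    edges = np.quantile(depth, np.linspace(0.0, 1.0, num_stages + 1))
    band_of = np.digitize(depth, edges[1:-1])
    out = hazy.astype(np.float64).copy()
    for k in range(num_stages):  # process bands in near-to-far order
        band = band_of == k
        tb = t[band][:, None]
        # Invert I = J*t + A*(1 - t) on this depth band only. In DDRL a
        # policy network selects the depth to process next and a regression
        # network performs the restoration; both are stand-ins here.
        out[band] = np.clip((out[band] - A * (1.0 - tb)) / tb, 0.0, 1.0)
    return out
```

A hypothetical call would be `clean = dehaze_near_to_far(hazy_img, depth_map)` with both arrays normalized as noted above; in the actual system the near-to-far ordering is encouraged by the policy regularization term rather than enforced by a fixed sweep.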