An Adaptive Threshold for the Canny Algorithm With Deep Reinforcement Learning

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 156846-156856
Main Authors: Choi, Keong-Hun; Ha, Jong-Eun
Format: Article
Language:English
Description
Summary: The Canny algorithm is widely used for edge detection. It requires parameter adjustment to obtain a high-quality edge image. Several methods can select the parameters automatically, but they cannot cover the diverse variations in an image. The Canny algorithm requires setting three parameters: one is related to the smoothing window size, and the other two are the low and high thresholds. In this paper, we assume that the smoothing window size is fixed to a predefined value. This paper proposes a method that provides adaptive thresholds for the Canny algorithm and operates well on images acquired under various conditions. We select optimal values of the two thresholds adaptively using an algorithm based on the Deep Q-Network (DQN). We introduce a state model, a policy model, and a reward model to formulate the problem in deep reinforcement learning. Unlike existing supervised approaches, the proposed method has the advantage that it can adapt to a new environment using only unlabeled images. We show the feasibility of the proposed algorithm through diverse experimental results.
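The two thresholds the paper tunes control Canny's double-threshold (hysteresis) stage: gradient magnitudes above the high threshold become strong edges, magnitudes between the two thresholds are kept only when connected to a strong edge, and the rest are discarded. The sketch below illustrates that stage alone with NumPy; it is not the authors' implementation (a full Canny pipeline also includes Gaussian smoothing, gradient computation, and non-maximum suppression), and the function name and iterative connectivity check are illustrative assumptions:

```python
import numpy as np

def double_threshold_hysteresis(grad_mag, low, high):
    """Canny-style double threshold on a gradient-magnitude map.

    Pixels >= `high` are strong edges; pixels in [`low`, `high`) are weak
    and are promoted to edges only if 8-connected to an accepted edge.
    """
    strong = grad_mag >= high
    weak = (grad_mag >= low) & ~strong
    edges = strong.copy()

    # Iteratively promote weak pixels adjacent to accepted edges until stable.
    changed = True
    while changed:
        changed = False
        padded = np.pad(edges, 1)  # pad with False so shifts stay in bounds
        neighbour = np.zeros_like(edges)
        h, w = edges.shape
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                # True where the pixel shifted by (dy, dx) is an edge.
                neighbour |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        promoted = weak & neighbour & ~edges
        if promoted.any():
            edges |= promoted
            changed = True
    return edges
```

For example, with `low=4` and `high=8`, a magnitude-5 pixel next to a magnitude-10 pixel is kept, while an isolated magnitude-5 pixel is discarded; the paper's contribution is choosing `low` and `high` per image with a DQN rather than fixing them by hand.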
ISSN:2169-3536
DOI:10.1109/ACCESS.2021.3130132