
Visual Explanation of Object Detectors via Saliency Maps

Bibliographic Details
Main Authors: Xiao, Jin, Liu, Wenrui, Hu, Xiaoguang, Jiang, Hao, Wang, Weipeng, Li, Yahang
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: In recent years, the rapid progress of deep learning has led object detection algorithms based on deep neural networks to almost completely replace traditional methods, bringing significant improvements in both the accuracy and the speed of object detection. However, the deep neural networks that support decision-making in object detection are complex black boxes whose internal logic and operations are hidden. This lack of transparency makes deep learning models unexplainable, which limits their practical deployment. In this paper, we analyze the current state and future trends of interpretability techniques in object detection and propose an effective interpretable algorithm for object detection. The algorithm aims to explain the underlying logic of AI decision-making and, through quantitative evaluation indicators, to support the high performance and credibility of object detection algorithms based on deep neural networks.
ISSN:2158-2297
DOI:10.1109/ICIEA58696.2023.10241902
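The abstract does not detail the proposed algorithm, but the title names saliency maps as the explanation mechanism. Below is a minimal, hypothetical sketch of one common way to build a saliency map for an object detector, in the spirit of perturbation-based methods such as D-RISE: random occlusion masks are scored by how well the detector still recovers a target box on the masked image. The detector callable, target_box, and all parameter values are illustrative assumptions, not the authors' method.

    import numpy as np

    def iou(a, b):
        # Intersection-over-union of two [x1, y1, x2, y2] boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / (union + 1e-9)

    def saliency_map(image, detector, target_box, n_masks=500, cell=16,
                     p_keep=0.5, seed=0):
        # image: float array of shape (H, W, 3) in [0, 1].
        # detector: assumed callable returning (boxes, scores) for an image.
        # target_box: the detection [x1, y1, x2, y2] we want to explain.
        rng = np.random.default_rng(seed)
        h, w = image.shape[:2]
        heat = np.zeros((h, w))
        for _ in range(n_masks):
            # Coarse random binary mask, upsampled to image size (RISE-style).
            grid = rng.random((h // cell + 1, w // cell + 1)) < p_keep
            mask = np.kron(grid, np.ones((cell, cell)))[:h, :w]
            boxes, scores = detector(image * mask[..., None])
            # Weight the mask by the best IoU-times-confidence match
            # to the target box on the perturbed image.
            weight = max((iou(b, target_box) * s
                          for b, s in zip(boxes, scores)), default=0.0)
            heat += weight * mask
        # High values mark pixels the target detection relies on.
        return heat / n_masks

Averaging the kept-pixel masks weighted by detection quality yields a heat map: regions whose occlusion most degrades the target detection receive the highest saliency. Quantitative evaluation indicators of the kind the abstract mentions (e.g., deletion/insertion curves) could then be computed on such a map.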