
PEFNet: Position Enhancement Faster Network for Object Detection in Roadside Perception System


Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Huang, Lei, Huang, Wenzhun, Gong, Hai, Yu, Changqing, You, Zhuhong
Format: Article
Language:English
Description
Summary: Roadside perception is a challenging research area that presents even greater difficulties than vehicle perception. Because roadside cameras are mounted at varying locations and angles, roadside objects exhibit drastic multiscale variations, and the vast sensing field introduces more small-scale targets and complex backgrounds, making target recognition harder still. To address these problems, we focus on position information encoding to achieve accurate roadside object detection and propose the position enhancement faster network (PEFNet). Building on YOLOv6, the FasterNet Block is introduced into the backbone and neck networks to provide efficient feature extraction while making the model lighter. To improve small-target detection, a position-aware feature pyramid network (PA-PAN) is proposed to enhance position information encoding, and SPD-Conv is applied within the PA-PAN to further strengthen effective feature extraction. Finally, TSCODE is integrated into the detection head to achieve accurate target recognition and suppress background noise interference. Experiments on the Rope3D and UA-DETRAC datasets show that our model outperforms the advanced YOLOv6, YOLOX, and FCOS detectors in roadside object detection. Compared with YOLOv6, our method improves mAP0.50 on the Rope3D dataset from 78.18% to 82.39%, with the AP of small objects such as pedestrians increasing by 7.01%. Furthermore, PEFNet reduces the network's weight by 43.1% while maintaining a detection speed of 75 fps, achieving higher accuracy than previous algorithms at a comparable frame rate.
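The SPD-Conv mentioned in the summary refers to the space-to-depth convolution technique, which replaces strided convolutions or pooling with a lossless space-to-depth rearrangement followed by a non-strided convolution, preserving fine detail that matters for small targets. The PyTorch sketch below is only a minimal illustration of that idea; the channel widths, kernel size, normalization, and activation are assumptions for demonstration, not the exact PEFNet configuration.

import torch
import torch.nn as nn

class SPDConv(nn.Module):
    # Space-to-depth followed by a non-strided convolution (SPD-Conv sketch).
    # Channel widths, kernel size, BatchNorm, and SiLU are illustrative assumptions.
    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        # Rearranges each scale x scale spatial block into channels:
        # (B, C, H, W) -> (B, C*scale^2, H/scale, W/scale), discarding no pixels.
        self.space_to_depth = nn.PixelUnshuffle(scale)
        # A stride-1 convolution then fuses the extra channels instead of
        # dropping fine detail the way a strided conv or pooling layer would.
        self.conv = nn.Conv2d(in_channels * scale * scale, out_channels,
                              kernel_size=3, stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(self.space_to_depth(x))))

# Example: downsample a 64-channel feature map by 2x while keeping detail.
if __name__ == "__main__":
    layer = SPDConv(in_channels=64, out_channels=128)
    out = layer(torch.randn(1, 64, 80, 80))
    print(out.shape)  # torch.Size([1, 128, 40, 40])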
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3292881