Practical Implementation of Real-Time Waste Detection and Recycling based on Deep Learning for Delta Parallel Robot
Main Authors: (not listed in record)
Format: Conference Proceeding
Language: English
Summary: Intelligent robots play an essential role in waste management and recycling due to their high speed and wide range of applications. In this paper, two methods for waste detection and accurate pick-and-place based on computer vision and neural networks are presented. The proposed methods were implemented on a 3-DOF Delta parallel robot to demonstrate their accuracy and speed in a real intelligent system. The first method, Multi-Stage Detection, detects waste objects in two stages: object localization and segmentation, followed by classification. The second method uses a one-stage object detector, YOLOv5, which localizes and classifies waste objects simultaneously. The dataset used in this paper is built on the TrashNet dataset. To improve classification performance in the multi-stage method, a larger dataset was created through data augmentation; for the one-stage method, a new multi-label dataset was constructed from TrashNet. The experimental results were compared in terms of inference time and detection and classification metrics. The ResNet50 model achieved the highest accuracy in the multi-stage method, at 99.31%. In the one-stage method, the YOLOv5x model achieved the best mAP (@IoU = 0.75) of 97.4%, outperforming the YOLOv5s model by 0.8 percentage points; however, YOLOv5x's inference was about six times slower than YOLOv5s. Therefore, the YOLOv5s model was employed for real-time online waste detection, reaching 82.1% mAP (@IoU = 0.5) after being trained on real images from the waste-sorting platform.
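The mAP figures quoted in the summary are evaluated at fixed IoU thresholds (0.5 and 0.75). As a minimal sketch of the intersection-over-union computation underlying such detection metrics (box format and variable names are illustrative assumptions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Overlap area is zero when the boxes do not intersect.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box counts as a true positive only if its IoU with a
# ground-truth box exceeds the threshold (0.5 or 0.75 in the metrics above).
pred, truth = (15, 15, 55, 55), (20, 20, 60, 60)
print(iou(pred, truth) >= 0.5)  # → True (IoU ≈ 0.62)
```

Raising the threshold from 0.5 to 0.75 demands tighter localization, which is why mAP@0.75 is the stricter of the two figures reported.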
ISSN: 2643-279X
DOI: 10.1109/ICCKE60553.2023.10326225