FruityHub: A diverse collection of fruits created for edibility estimation

Bibliographic Details
Main Authors: Banerjee, Sriparna, Samanta, Bidisha, Chowdhuri, Swati, Chaudhuri, Sheli Sinha
Format: Conference Proceeding
Language: English
Description
Summary: The export of fresh fruits not only exerts a substantial influence on national and global economies but also serves as a crucial source of livelihood for a vast number of farmers. Automated estimation of these fruits' edibility (i.e., freshness level) is essential to maintain the quality of exported fruits and to reduce financial loss. Owing to the strong performance achieved by deep neural networks on image classification tasks in recent years, many researchers have designed automated methods that exploit deep neural architectures to perform non-invasive quality identification of several plant parts (leaves, fruits, vegetables, etc.) in a more time-efficient and cost-effective manner. The most notable hindrance faced by researchers when designing such automated methods is the lack of proper databases required to train them. To mitigate this research gap, in this work we have created a novel database, FruityHub, comprising a total of 1786 image frames belonging to eight different types of fruits, namely Dragon Fruits, Net Melons, Strawberries, Star Fruit, Zucchini, Mango, Pineapple and Plum. The image frames belonging to each fruit category are classified as Fresh, Damaged and Severely Damaged, which facilitates automated estimation of the edible quality of these fruits depending upon their skin color, texture, shape, etc. We have also validated the effectiveness of this database using fourteen deep neural networks, namely AlexNet, Vgg16, ResNet152, MobileNetv2, ShuffleNetv2, SqueezeNet, InceptionV3, InceptionResNetv2, DenseNet121, NasNetLarge, NasNetMobile, GhostNetv2, EfficientNetB7 and Xception. The validation is performed using quantitative metrics such as Test Accuracy, Recall, Precision and F1 score.
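The abstract evaluates classifiers with Test Accuracy, Recall, Precision and F1 score over the three edibility classes (Fresh, Damaged, Severely Damaged). A minimal sketch of how these per-class metrics are typically computed is shown below; the label lists are illustrative placeholders, not data from the paper.

```python
# Sketch of per-class Precision/Recall/F1 and overall accuracy for the three
# edibility classes used in FruityHub. The example labels below are
# hypothetical and only demonstrate the metric definitions.
CLASSES = ["Fresh", "Damaged", "Severely Damaged"]

y_true = ["Fresh", "Fresh", "Damaged", "Damaged", "Severely Damaged", "Fresh"]
y_pred = ["Fresh", "Damaged", "Damaged", "Damaged", "Severely Damaged", "Fresh"]

def classification_metrics(y_true, y_pred, classes):
    """Return {class: (precision, recall, f1)} and overall accuracy."""
    per_class = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class[c] = (precision, recall, f1)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return per_class, accuracy

per_class, accuracy = classification_metrics(y_true, y_pred, CLASSES)
```

In a multi-class setting like this one, the per-class values are usually averaged (macro- or weighted-average) to obtain a single Precision, Recall and F1 figure per network.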
ISSN: 2767-9934
DOI: 10.1109/IEMENTech60402.2023.10423480