Deep learning based approach for digitized herbarium specimen segmentation
Published in: Multimedia Tools and Applications, 2022-08, Vol. 81 (20), p. 28689-28707
Main Authors:
Format: Article
Language: English
Summary: As herbarium specimens are largely digitized and freely available in online portals, botanists aim to examine their taxonomic aspects, identify the plant specimen regions, and generate morphological data from them. However, various uninformative visual elements within a digitized herbarium specimen, such as the scale bar, color palette, specimen label, envelopes, barcode, and stamp, are a source of visual noise. Their identification requires dedicated detection methods, as they are typically placed at different locations and orientations within the herbarium sheet. Given a collection of digitized herbarium specimen images gathered from the Herbarium Haussknecht in Jena, Germany, we present in this paper a deep learning-based approach for semantic segmentation of specimen images. Two pipelines are involved in this work: (i) coarse segmentation and (ii) fine segmentation. Throughout the process, we describe the ground-truth annotation used for training our deep learning architecture. The experimental results demonstrate that our proposed model outperforms other architectures such as SegNet, Squeeze-SegNet, U-Net, and DeepLabv3: it achieves an accuracy of 91%, compared to 82%, 80%, 86%, and 90% obtained by SegNet, Squeeze-SegNet, U-Net, and DeepLabv3, respectively.
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-022-12935-8
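The summary above only describes the approach at a high level, so the sketch below is not the authors' architecture. It is a minimal, hypothetical PyTorch illustration of the two ideas the abstract names: a coarse pass followed by a fine pass over the same sheet, and an overall pixel-accuracy metric of the kind behind the reported 91%. The `TinySegNet` model, the six-class label set, and the averaging fusion of coarse and fine logits are illustrative assumptions, not taken from the paper.

```python
# Illustrative coarse-to-fine semantic segmentation sketch (not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 6  # hypothetical classes: plant, label, barcode, scale bar, envelope, background


class TinySegNet(nn.Module):
    """Small encoder-decoder stand-in that outputs per-pixel class logits."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.dec(self.enc(x))


def coarse_to_fine(model: nn.Module, sheet: torch.Tensor) -> torch.Tensor:
    """Two-pass inference: a coarse pass on a downscaled sheet, then a fine pass
    at full resolution; the upsampled coarse logits are averaged with the fine
    logits (one simple fusion choice, assumed here for illustration)."""
    h, w = sheet.shape[-2:]
    coarse_in = F.interpolate(sheet, scale_factor=0.25, mode="bilinear", align_corners=False)
    coarse_logits = F.interpolate(model(coarse_in), size=(h, w), mode="bilinear", align_corners=False)
    fine_logits = model(sheet)
    return (coarse_logits + fine_logits).argmax(dim=1)  # per-pixel class indices


def pixel_accuracy(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Overall pixel accuracy: fraction of pixels whose predicted class matches the mask."""
    return (pred == target).float().mean().item()


if __name__ == "__main__":
    model = TinySegNet(NUM_CLASSES).eval()
    sheet = torch.rand(1, 3, 256, 192)                     # stand-in herbarium sheet image
    target = torch.randint(0, NUM_CLASSES, (1, 256, 192))  # stand-in ground-truth mask
    with torch.no_grad():
        pred = coarse_to_fine(model, sheet)
    print(f"pixel accuracy: {pixel_accuracy(pred, target):.2%}")
```

With an untrained model and random data this only exercises the shapes and the metric; the point is the structure of the two-pass inference and the accuracy computation, not the numbers.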