
Contextual based hybrid classification with FCM to handle mixed pixels and edge preservation

Bibliographic Details
Published in: International Journal of Information Technology (Singapore. Online), 2024-08, Vol. 16 (6), p. 3537-3547
Main Authors: Vishnoi, Swati; Pareek, Meenakshi
Format: Article
Language: English
Description
Summary: This research paper introduces an innovative method for land cover classification in satellite imagery, specifically designed to address the challenges posed by mixed pixels and edge preservation. Traditional classification methods struggle to classify these areas accurately, leading to misclassifications and reduced mapping accuracy. The proposed method, Contextual based Hybrid Classification with Fuzzy C-Means (FCM), combines the strengths of pixel-based and object-based classification techniques. It first segments the image into homogeneous regions using an object-based approach, which mitigates the effect of mixed pixels, and then uses contextual information from neighboring pixels to refine the classification results, with particular attention to preserving edges between different land cover types. This two-step process improves classification accuracy, especially in complex landscapes where mixed pixels and fine edge detail are prevalent. The study also presents a comparative analysis of FCM classifiers with a Smoothness Prior (SP) and a Discontinuity Adaptive Prior (DAP) under varying parameters. The classifiers were tested on the classes Agriculture, Eucalyptus, Water, Barren Land, and Sal Forest using AWiFS, LISS-III, and LISS-IV images. The highest overall accuracy (95.59%) is achieved by the DAP (H3) model for m = 2.4, λ = 0.8, γ = 0.8.
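
The general idea of combining fuzzy memberships with a contextual refinement over neighboring pixels can be illustrated with a minimal sketch. This is only an assumption-laden illustration: the 3x3 neighborhood averaging, the blending weight lam, and the function name contextual_fcm are hypothetical and do not reproduce the paper's SP/DAP priors; m and lam loosely mirror the parameters named in the abstract.

    # Minimal sketch: fuzzy c-means with a simple spatial smoothing of memberships.
    # The contextual step (local averaging of membership maps) is an illustrative
    # stand-in, not the SP/DAP formulation used in the paper.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def contextual_fcm(image, n_clusters=5, m=2.4, lam=0.8, n_iter=50, eps=1e-6, seed=0):
        """image: (H, W, B) array of pixel spectra; returns labels (H, W) and cluster centers."""
        H, W, B = image.shape
        X = image.reshape(-1, B).astype(float)                  # pixels as rows
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), n_clusters, replace=False)]

        for _ in range(n_iter):
            # squared distances from every pixel to every center, shape (N, C)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps
            # standard FCM membership update with fuzzifier m
            u = 1.0 / (d2 ** (1.0 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)

            # contextual step: blend each membership map with its 3x3 local mean
            u_maps = u.reshape(H, W, n_clusters)
            smoothed = np.stack(
                [uniform_filter(u_maps[..., c], size=3) for c in range(n_clusters)], axis=-1
            )
            u_maps = (1 - lam) * u_maps + lam * smoothed
            u = u_maps.reshape(-1, n_clusters)
            u /= u.sum(axis=1, keepdims=True)

            # center update with fuzzified memberships
            um = u ** m
            centers = (um.T @ X) / um.sum(axis=0)[:, None]

        labels = u.argmax(axis=1).reshape(H, W)
        return labels, centers

    # usage on a synthetic 3-band image
    labels, centers = contextual_fcm(np.random.rand(64, 64, 3), n_clusters=5)

In this sketch the smoothing of membership maps plays the role of the contextual prior: it pulls isolated, likely mixed pixels toward the label of their neighborhood, at the cost of some edge blurring that the paper's discontinuity-adaptive prior is designed to avoid.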
ISSN:2511-2104
2511-2112
DOI:10.1007/s41870-024-01959-y