Integrating ILSR to Bag-of-Visual Words Model Based on Sparse Codes of SIFT Features Representations
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: In computer vision, the bag-of-visual-words (BOV) approach has been shown to yield state-of-the-art results. To improve the BOV model, we replace vector quantization (VQ), such as k-means, with sparse codes of SIFT features, since VQ introduces larger quantization errors. Because local features in most categories exhibit spatial dependence in the real world, we use the neighboring features of each local feature as its implicit local spatial relationship (ILSR). This paper proposes an object categorization algorithm that integrates the implicit local spatial relationship with appearance features based on sparse codes of SIFT, giving two sources of information for categorization. The algorithm is applied to the Caltech-101 and Caltech-256 datasets to validate its effectiveness, and the experimental results show good performance.
ISSN: 1051-4651, 2831-7475
DOI: 10.1109/ICPR.2010.1041
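The summary describes sparse coding of SIFT descriptors in place of hard vector quantization, plus a neighborhood-based ILSR cue. Below is a minimal, illustrative Python sketch of that general pipeline, assuming a pre-learned dictionary and scikit-learn's SparseCoder; the radius-based neighborhood, max pooling, and all parameter values are assumptions made for illustration, not the authors' exact ILSR construction.

    import numpy as np
    from sklearn.decomposition import SparseCoder

    # Hypothetical inputs: N 128-d SIFT descriptors with their (x, y) keypoint
    # locations, and a pre-learned dictionary D of K visual-word atoms.
    rng = np.random.default_rng(0)
    N, K = 200, 64
    sift = rng.normal(size=(N, 128))
    xy = rng.uniform(0, 300, size=(N, 2))            # keypoint coordinates
    D = rng.normal(size=(K, 128))
    D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm atoms

    # Sparse codes of SIFT features (replacing hard VQ assignment).
    coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                        transform_alpha=0.15)
    codes = np.abs(coder.transform(sift))            # shape (N, K)

    # Appearance feature: max pooling of sparse codes over the whole image.
    appearance = codes.max(axis=0)

    # ILSR-style feature: for every descriptor, pool the codes of its spatial
    # neighbors (here: all keypoints within a fixed radius), then pool again
    # over the image. Radius and pooling scheme are illustrative choices.
    radius = 30.0
    dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    neighbor_pool = np.stack([
        codes[(d < radius) & (d > 0)].max(axis=0)
        if ((d < radius) & (d > 0)).any() else np.zeros(K)
        for d in dists
    ])
    ilsr = neighbor_pool.max(axis=0)

    # Two sources of information combined into one image representation.
    image_feature = np.concatenate([appearance, ilsr])
    print(image_feature.shape)                       # (2 * K,)

Concatenating the appearance pooling with the neighborhood pooling mirrors the summary's idea of combining two sources of information; a real system would learn the dictionary from training SIFT descriptors and feed the concatenated vector to a classifier such as a linear SVM.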