Fast text categorization using concise semantic analysis
Published in: Pattern Recognition Letters, 2011-02, Vol. 32 (3), pp. 441-448
Main Authors: , , , ,
Format: Article
Language: English
Summary: ► The contributions of this paper are threefold. ► First, a new methodology for extracting concepts from category labels is proposed. It is simple but efficient and is designed specifically for text categorization applications. ► Second, a new weighting method for calculating the degree of relationship between words and concepts is proposed. The new method takes document length into consideration and gives higher weights to occurrences of words in short documents. ► Finally, the proposed approach is evaluated on three different corpora with two commonly used learning algorithms. The experimental results and analysis may provide useful information for future research on this topic.
Text representation is a necessary procedure for text categorization tasks. Currently, bag of words (BOW) is the most widely used text representation method, but it suffers from two drawbacks. First, the quantity of words is huge; second, it is not feasible to calculate the relationships between words. Semantic analysis (SA) techniques help BOW overcome these two drawbacks by interpreting words and documents in a space of concepts. However, existing SA techniques are not designed for text categorization and often incur huge computing costs. This paper proposes a concise semantic analysis (CSA) technique for text categorization tasks. CSA extracts a few concepts from category labels and then implements concise interpretation on words and documents. These concepts are small in quantity, great in generality, and tightly related to the category labels. Therefore, CSA preserves the necessary information for classifiers at very low computing cost. To evaluate CSA, experiments on three data sets (Reuters-21578, 20-NewsGroup and Tancorp) were conducted, and the results show that CSA reaches micro- and macro-F1 performance comparable with BOW, if not better. Experiments also show that CSA helps dimension-sensitive learning algorithms such as k-nearest neighbor (kNN) to eliminate the "Curse of Dimensionality" and, as a result, reach performance comparable with support vector machines (SVM) in text categorization applications. In addition, CSA is language independent and performs equally well in both Chinese and English.
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2010.11.001
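
As a rough illustration of the approach summarized above, the following is a minimal, hypothetical Python sketch of a CSA-style representation: concepts are taken from the category labels, each word accumulates length-aware weights toward the concepts of the training documents it appears in, and documents are then interpreted as low-dimensional concept vectors. The specific weighting formula and all names below are illustrative assumptions, not the formulas published in the paper.

```python
# Hypothetical sketch of a CSA-style representation, based only on the
# abstract above: concepts come from category labels, words are mapped to
# concept-weight vectors, and documents become length-normalized sums of
# their words' concept vectors. The 1 / log2(2 + doc_length) factor is an
# illustrative stand-in for the paper's length-aware weighting scheme.

import math
from collections import defaultdict

def build_word_concept_weights(train_docs):
    """train_docs: list of (tokens, category_label) pairs.
    Returns {word: {concept: weight}}, with concepts = category labels."""
    weights = defaultdict(lambda: defaultdict(float))
    for tokens, label in train_docs:
        # Occurrences in shorter documents contribute more (length-aware weighting).
        contrib = 1.0 / math.log2(2 + len(tokens))
        for tok in tokens:
            weights[tok][label] += contrib
    return weights

def document_to_concept_vector(tokens, weights, concepts):
    """Interpret a document in the small concept space."""
    vec = dict.fromkeys(concepts, 0.0)
    for tok in tokens:
        for concept, w in weights.get(tok, {}).items():
            vec[concept] += w
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {c: v / norm for c, v in vec.items()}

# Usage: the low-dimensional concept vectors replace the full BOW vectors.
train = [("stocks fell sharply".split(), "finance"),
         ("team wins the final".split(), "sports")]
w = build_word_concept_weights(train)
concepts = {label for _, label in train}
print(document_to_concept_vector("stocks rally".split(), w, concepts))
```

In this sketch, the resulting concept-space vectors are what a dimension-sensitive learner such as kNN would consume instead of a full bag-of-words representation.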