
Learning More Universal Representations for Transfer-Learning

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020-09, Vol. 42 (9), p. 2212-2224
Main Authors: Tamaazousti, Youssef; Le Borgne, Hervé; Hudelot, Céline; Seddik, Mohamed-El-Amine; Tamaazousti, Mohamed
Format: Article
Language: English
Summary: A representation is said to be universal if it encodes any element of the visual world (e.g., objects, scenes) in any configuration (e.g., scale, context). While not expecting purely universal representations, the goal in the literature is to improve the level of universality, starting from a representation that already has a certain level. One way to improve that level is to diversify the source task, but this requires a large amount of additional annotated data, which is costly in terms of manual work and required expertise. We formalize such a diversification process, then propose two methods to improve the universality of CNN representations while limiting the need for additional annotated data. The first relies on human categorization knowledge and the second on re-training through fine-tuning. We also propose a new aggregating metric to evaluate universality in a transfer-learning scheme, which addresses more aspects than previous works. Based on this metric, we demonstrate the interest of our methods on 10 target problems, covering classification across a variety of visual domains.
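
As a rough illustration of the transfer-learning evaluation scheme the summary mentions, the sketch below scores a frozen, pretrained CNN backbone on several target classification problems and aggregates the per-target accuracies. It is not the paper's protocol: the function names (`extract_features`, `linear_probe_accuracy`, `universality_score`), the linear-probe choice, and the plain mean aggregation are illustrative assumptions; the paper defines its own aggregating metric and target problems.

```python
# Hedged sketch of a generic transfer-learning evaluation loop (PyTorch).
# The linear probe and the mean aggregation are assumptions, not the paper's method.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def extract_features(backbone: nn.Module, loader: DataLoader, device: str = "cpu"):
    """Run a frozen, pretrained backbone over a target dataset and collect features."""
    backbone.eval().to(device)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x.to(device)).flatten(1).cpu())
            labels.append(y)
    return torch.cat(feats), torch.cat(labels)


def linear_probe_accuracy(features, labels, num_classes, epochs=100, lr=1e-2):
    """Fit a linear classifier on the frozen features (one simple transfer scheme)."""
    clf = nn.Linear(features.size(1), num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(features), labels).backward()
        opt.step()
    # For brevity, accuracy is measured on the same features; a real protocol
    # would use held-out target splits.
    return (clf(features).argmax(dim=1) == labels).float().mean().item()


def universality_score(backbone: nn.Module, target_loaders: dict):
    """Average the per-target accuracies; this plain mean only stands in for the
    aggregating metric defined in the paper."""
    scores = []
    for name, (loader, num_classes) in target_loaders.items():
        feats, labels = extract_features(backbone, loader)
        acc = linear_probe_accuracy(feats, labels, num_classes)
        print(f"{name}: accuracy = {acc:.3f}")
        scores.append(acc)
    return sum(scores) / len(scores)
```

In such a setting, a more universal representation is one that raises the aggregated score across diverse target domains without retraining a separate backbone per target; the paper's two methods and its actual metric are detailed in the full text.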
ISSN: 0162-8828, 1939-3539, 2160-9292
DOI: 10.1109/TPAMI.2019.2913857