
Interpretable Measures of Conceptual Similarity by Complexity-Constrained Descriptive Auto-Encoding


Bibliographic Details
Main Authors: Achille, Alessandro, Ver Steeg, Greg, Liu, Tian Yu, Trager, Matthew, Klingenberg, Carson, Soatto, Stefano
Format: Conference Proceeding
Language: English
Description
Summary: Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning. In legal doctrine, however, determining the degree of similarity between works requires subjective analysis, and fact-finders (judges and juries) can demonstrate considerable variability in these subjective judgement calls. Images that are structurally similar can be deemed dissimilar, whereas images of completely different scenes can be deemed similar enough to support a claim of copying. We seek to define and compute a notion of 'conceptual similarity' among images that captures high-level relations even among images that do not share repeated elements or visually similar components. The idea is to use a base multi-modal model to generate 'explanations' (captions) of visual data at increasing levels of complexity. Then, similarity can be measured by the length of the caption needed to discriminate between the two images: two highly dissimilar images can be discriminated early in their description, whereas conceptually similar ones will need more detail to be distinguished. We operationalize this definition and show that it correlates with subjective (averaged human evaluation) assessment, and beats existing baselines on both image-to-image and text-to-text similarity benchmarks. Beyond just providing a number, our method also offers interpretability by pointing to the specific level of granularity of the description where the source data are differentiated.
ISSN: 2575-7075
DOI: 10.1109/CVPR52733.2024.01052
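
The summary describes an operational recipe: caption both images at increasing levels of detail and record how much description is needed before the two can be told apart. Below is a minimal sketch of that idea in Python. The function name `conceptual_similarity`, the lexical stand-in distance from `difflib`, and the divergence `threshold` are illustrative assumptions only; the authors' method instead uses a multi-modal base model and complexity-constrained descriptions.

```python
from difflib import SequenceMatcher
from typing import Callable, Optional, Sequence


def conceptual_similarity(
    captions_a: Sequence[str],
    captions_b: Sequence[str],
    text_distance: Optional[Callable[[str, str], float]] = None,
    threshold: float = 0.5,
) -> float:
    """Score in [0, 1]: how much descriptive detail is needed before the
    two caption sequences diverge (0 = distinguished immediately,
    1 = never distinguished within the available levels of detail).

    captions_a / captions_b: captions of each image at increasing levels
    of complexity (index 0 = coarsest description).
    """
    if text_distance is None:
        # Stand-in lexical distance for illustration only; the paper's
        # discrimination criterion comes from a multi-modal model,
        # not string matching.
        text_distance = lambda s, t: 1.0 - SequenceMatcher(None, s, t).ratio()

    levels = min(len(captions_a), len(captions_b))
    for level in range(levels):
        if text_distance(captions_a[level], captions_b[level]) > threshold:
            # The images are already distinguishable at this level of detail:
            # the earlier they diverge, the less conceptually similar they are.
            return level / levels
    return 1.0  # never distinguished within the available caption budget


if __name__ == "__main__":
    # Hypothetical captions at three levels of increasing detail.
    dog = ["an animal outdoors",
           "a dog playing in a park",
           "a golden retriever chasing a ball on wet grass"]
    cat = ["an animal indoors",
           "a cat resting on a sofa",
           "a tabby cat curled up on a grey sofa cushion"]
    # Prints a score in [0, 1]; lower means less descriptive detail was
    # needed to tell the two images apart.
    print(conceptual_similarity(dog, cat))
```

This also illustrates the interpretability claim in the summary: the level at which the loop exits points to the granularity of description at which the two inputs are differentiated.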