
New Challenges in Point Cloud Visual Quality Assessment: A Systematic Review (Dataset)

Bibliographic Details
Main Authors: Tious, Amar; Vigier, Toinon; Ricordel, Vincent
Format: Dataset
Language: English
Subjects:
Online Access: Get full text
Description
Summary: This dataset is a collection of annotated information on the scientific papers screened and analyzed for the systematic review of the literature in Point Cloud Visual Quality Assessment. The data is structured as follows:

General information
- Document title
- Authors
- Year of publication
- Venue (conference or journal title)
- Citations (number)
- URL/DOI

About the content
- Content type: point clouds (PC), colored point clouds (CPC), meshes, dynamic point clouds (DPC)
- Content source: source of the content used in a subjective QA test or in the evaluation of one or more QA metrics

About metric benchmarks
- Subjective ground-truth data: dataset(s) providing the subjective scores used as ground truth in a QA metric benchmark
- Assessed metrics: types of metrics assessed in a benchmark (JPEG standards, IQM, NR, state-of-the-art, others)
- Performance measures: PLCC, SROCC, KRCC, RMSE, OR, others

About objective QA metrics
- Metric: name given to the metric introduced in the paper
- Base: 3D-based or projection-based
- Categories: categories that characterize the approach of the proposed metric (feature-based, learning-based, perceptual-based, IQM, others)
- Reference: full-reference (FR), reduced-reference (RR), or no-reference (NR)

About subjective QA experiments
- Display: type of display (2D, 3D, AR, MR, VR) and interaction approach (passive, interactive, 3DoF, 6DoF) used in the described experiment
- Rendering: type of rendering used to display the stimuli (points, squares, cubes, surface)
- Lab/Remote: whether the experiment was run in one or more lab environments, or remotely (Lab, Cross-Lab, Remote)
- Rating: subjective rating methodology used in the experiment (ACR, DSIS, PWC, others)
- Dataset: name of the new subjective dataset, if the experiment's results were published
- Observers: number of observers
- Distortion type: types of distortions applied to the stimuli and assessed in the experiment
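For illustration, one annotation record following the field groups above could be represented as a nested dictionary. This is a minimal sketch: the key names and all sample values below are hypothetical placeholders, not taken from the actual dataset.

```python
# Hypothetical sketch of a single paper annotation, grouped by the
# sections described in the dataset summary. Values are illustrative only.
record = {
    "general": {
        "title": "Example PCQA Paper",       # Document title (placeholder)
        "authors": ["A. Author"],
        "year": 2023,
        "venue": "Example Conference",       # Conference or journal title
        "citations": 0,
        "url_doi": "10.0000/placeholder",
    },
    "content": {
        "content_type": ["CPC"],             # PC, CPC, Meshes, DPC
        "content_source": "Example source",
    },
    "objective_metric": {
        "metric": "ExampleMetric",
        "base": "3D-based",                  # or "Projection-based"
        "categories": ["Feature-based"],     # Learning-based, Perceptual-based, IQM, ...
        "reference": "FR",                   # FR, RR, or NR
    },
    "subjective_experiment": {
        "display": "2D",                     # 2D, 3D, AR, MR, VR
        "interaction": "passive",            # passive, interactive, 3DoF, 6DoF
        "rendering": "Points",               # Points, Squares, Cubes, Surface
        "lab_remote": "Lab",                 # Lab, Cross-Lab, Remote
        "rating": "ACR",                     # ACR, DSIS, PWC, ...
        "observers": 24,
        "distortion_type": ["compression"],
    },
}

# Simple consistency check on one categorical field
assert record["objective_metric"]["reference"] in {"FR", "RR", "NR"}
```

A flat, per-column layout (e.g. one spreadsheet row per paper) would encode the same information; the nesting here only mirrors the four "About ..." groups of the summary.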
ISSN: 2673-8198
DOI: 10.5281/zenodo.13992558