
On the evaluation of research software: the CDUR procedure [version 2; peer review: 2 approved]

Bibliographic Details
Published in: F1000Research 2019, Vol. 8, p. 1353
Main Authors: Gomez-Diaz, Teresa; Recio, Tomas
Format: Article
Language: English
Summary: Background: Evaluation of the quality of research software is a challenging and relevant issue that is still not sufficiently addressed by the scientific community. Methods: Our contribution begins by defining, precisely but widely enough, the notions of research software (RS) and of its authors, followed by a study of the evaluation issues, as the basis for proposing a sound assessment protocol: the CDUR procedure. Results: CDUR comprises four steps: Citation, to deal with correct RS identification; Dissemination, to measure good dissemination practices; Use, devoted to the evaluation of usability aspects; and Research, to assess the impact of the scientific work. Conclusions: Some conclusions and recommendations are finally included. The evaluation of research is the keystone to boost the evolution of Open Science policies and practices. It is also our belief that research software evaluation is a fundamental step to induce better research software practices and, thus, a step towards more efficient science.
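
The four CDUR steps can be read as a small per-software checklist that an evaluator fills in. The Python sketch below is only an illustration of that idea; the class name, fields, and the 0-2 scoring scale are assumptions made here and are not part of the published procedure.

from dataclasses import dataclass

# Hypothetical CDUR-style checklist; the fields and the 0-2 scale are
# illustrative assumptions, not prescribed by the CDUR paper.
@dataclass
class CDURAssessment:
    software_name: str
    citation: int = 0        # C: is the research software correctly identified and citable?
    dissemination: int = 0   # D: licence, repository, and other dissemination practices
    use: int = 0             # U: usability aspects (documentation, installation, tests)
    research: int = 0        # R: impact on the scientific work it supports

    def total(self) -> int:
        # Aggregate score over the four steps (illustrative only).
        return self.citation + self.dissemination + self.use + self.research

# Example with made-up values.
report = CDURAssessment("ExampleTool", citation=2, dissemination=1, use=2, research=1)
print(f"{report.software_name}: CDUR total = {report.total()} / 8")
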
ISSN: 2046-1402
DOI: 10.12688/f1000research.19994.2