A checklist-based approach to assess the systematicity of the abstracts of reviews self-identifying as systematic reviews
Main Authors: ,
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: Systematic reviews are crucial for various stakeholders because they enable evidence-based decisions without requiring readers to sift through a large volume of research, and they are increasingly popular in the software engineering field. The abstract is one of the most important components of a systematic review: it usually reflects the content of the review and may be the only part that most readers consult when forming an opinion on a given topic. Moreover, the abstract is typically the main information readers use to decide whether to access the full text of the review. Because an abstract summarizes the review, readers may rely largely on it to judge both the quality of the review and its methodological rigor. However, abstracts are sometimes poorly written and may therefore give a misleading, or even harmful, picture of a review's content. To assess abstracts, we propose a measure that quantifies the systematicity of reviews' abstracts, i.e., the extent to which these abstracts exhibit good reporting quality. Experiments on 151 reviews published in the software engineering (SE) field showed that their abstracts exhibit suboptimal systematicity.
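The abstract does not spell out the formula behind the systematicity measure; a plausible sketch, assuming a checklist-based score computed as the fraction of reporting items an abstract covers (the item list below is illustrative, not taken from the paper):

```python
# Hypothetical checklist of reporting items an abstract should cover.
# The actual items and scoring rule used by the authors may differ.
CHECKLIST = [
    "background", "objectives", "search methods",
    "inclusion criteria", "results", "conclusions",
]

def systematicity(reported_items):
    """Return the share of checklist items reported in an abstract (0.0 to 1.0)."""
    reported = {item.lower() for item in reported_items}
    return sum(item in reported for item in CHECKLIST) / len(CHECKLIST)

# Example: an abstract reporting 3 of the 6 assumed items scores 0.5.
score = systematicity(["Background", "Results", "Conclusions"])
print(round(score, 2))  # → 0.5
```

Under this reading, "suboptimal systematicity" would correspond to average scores well below 1.0 across the 151 reviews.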
ISSN: 2640-0715
DOI: 10.1109/APSEC57359.2022.00071