
A standard framework for gamification evaluation in education and training of software engineering: an evaluation from a proof of concept

Bibliographic Details
Main Authors: Monteiro, Rodrigo Henrique Barbosa, Oliveira, Sandro Ronaldo Bezerra, De Almeida Souza, Mauricio Ronny
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: This Research to Practice Full Paper presents how gamification has been used to motivate and engage participants in software engineering education and training. The effects of gamification in this area have been studied and reported since 2011. However, no studies propose standard procedures for evaluating gamification in the specific context of software engineering education and training. As a result, each study proposes its own evaluation procedures, making it difficult to compare approaches. The standardization of criteria, measures, and indicators can allow an objective comparison between primary studies on gamification in software engineering, reinforcing results and revealing trends. Thus, the objective of this study is to propose and evaluate a framework for the evaluation of gamification in the context of software engineering education and training. The proposed framework focuses on the structural aspect of an evaluation, consisting of concepts (entities) and their relationships. The development of this framework consisted of three steps: (1) the definition of the framework structure, based on the results of a systematic review of the literature on evaluation of gamification in software engineering education and practice, and its adaptation to the GQIM (Goal-Question-Indicator-Metric) model; (2) an ad hoc review of this framework by three researchers; and (3) the execution of a PoC (Proof of Concept) evaluation, in order to perform a preliminary assessment of the framework's adequacy. As a result, we were able to use the framework to model the evaluation of two primary studies, documenting the items found in these studies, which revealed the existence of a single common measure between the two studies (Total lines of code, or LOC). The existence of only one common measure in both evaluations makes it difficult to compare the results of the studies.
Additionally, we observed that one of the studies neither presented information to justify the choice of the measures used in its evaluation nor summarized all the data collected in its evaluation procedures. Therefore, the framework may also help identify points of improvement in the validity of evaluation studies.
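The structural idea described above, modeling an evaluation as GQIM-style entities and then comparing the measures documented by different primary studies, can be sketched as follows. This is a minimal illustration, not the paper's actual schema: the class names, the `common_metrics` helper, and all measures other than "Total lines of code (LOC)" (the one measure the abstract reports as shared) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Metric:
    """A measure documented in a primary study's evaluation."""
    name: str

@dataclass
class Evaluation:
    """One modeled evaluation: a goal and the metrics used to assess it.

    A fuller GQIM model would also link Questions and Indicators between
    the goal and the metrics; they are omitted here for brevity.
    """
    study: str
    goal: str
    metrics: set[Metric] = field(default_factory=set)

def common_metrics(a: Evaluation, b: Evaluation) -> set[Metric]:
    """Measures shared by two modeled evaluations (set intersection)."""
    return a.metrics & b.metrics

# Two hypothetical primary studies; only LOC appears in both.
study_a = Evaluation(
    study="Primary study A",
    goal="Assess gamification effect on student engagement",
    metrics={Metric("Total lines of code (LOC)"), Metric("Commits per week")},
)
study_b = Evaluation(
    study="Primary study B",
    goal="Assess gamification effect on code output",
    metrics={Metric("Total lines of code (LOC)"), Metric("Test coverage")},
)

shared = common_metrics(study_a, study_b)
print(sorted(m.name for m in shared))
```

With only one metric in the intersection, the two modeled evaluations have almost no common ground for comparing results, which is exactly the comparability problem the framework is meant to expose.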
ISSN: 2377-634X
DOI: 10.1109/FIE49875.2021.9637232