A Generative AI-Based Assistant to Evaluate Short and Long Answer Questions

Bibliographic Details
Published in: SN Computer Science 2024-06, Vol. 5 (5), p. 633
Main Authors: Gaikwad, Harsha R., Kiwelekar, Arvind W.
Format: Article
Language:English
Summary: Assessment of long and short answers is a tedious task. The evaluation procedure is usually subjective, resulting in inaccuracies and substantial grading discrepancies. Generative AI-based tools have the potential to significantly lessen the burden on teachers and expedite the evaluation process. Producing accurate semantic representations of data is one of the challenges in developing generative AI-based tools. This paper proposes a model for automatically evaluating long and short answers that relies on generative AI-based text embeddings and semantic similarity. Answers are graded by measuring the cosine similarity between model answers and students' responses. The model is evaluated against accuracy and root mean square error (RMSE). The proposed model is flexible enough for fine-tuning with other course-specific data sets.
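The grading scheme described in the summary can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the embedding model is left abstract (the paper does not specify one here), and the linear mapping from similarity to marks in `grade_answer` is an assumed scoring rule for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def grade_answer(model_emb, student_emb, max_marks=10.0):
    """Map similarity of student answer to model answer onto a mark.

    Assumes a simple linear scale; negative similarity is clipped to 0.
    """
    sim = cosine_similarity(model_emb, student_emb)
    return round(max(sim, 0.0) * max_marks, 2)

def rmse(predicted, actual):
    """Root mean square error between predicted and human-assigned marks."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))
```

In practice the two embedding vectors would come from a text-embedding model applied to the model answer and the student response; the RMSE helper corresponds to the evaluation metric named in the summary.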
ISSN: 2662-995X, 2661-8907
DOI: 10.1007/s42979-024-02965-4