
Using scientific abstracts to measure learning outcomes in the biological sciences

Bibliographic Details
Published in: Journal of Microbiology & Biology Education, 2013, Vol. 14(2), pp. 275–276
Main Authors: Giorno, Rebecca, Wolf, William, Hindmarsh, Patrick L, Yule, Jeffrey V, Shultz, Jeff
Format: Article
Language:English
Description
Summary: Educators must often measure the effectiveness of their instruction. We designed, developed, and preliminarily evaluated a multiple-choice assessment tool that requires students to apply what they have learned to evaluate scientific abstracts. This examination methodology offers the flexibility both to challenge students in specific subject areas and to develop the critical thinking skills that upper-level classes and research require. Although students do not create an end product (performance), they must demonstrate proficiency in a specific skill that scientists use on a regular basis: critically evaluating scientific literature via abstract analysis, a direct measure of scientific literacy. Abstracts from peer-reviewed research articles lend themselves to in-class testing, since they are typically 250 words or fewer and their analysis requires skills beyond rote memorization. To address the effectiveness of particular courses, we performed pre- and postcourse assessments in five different upper-level courses (Ecology, Genetics, Virology, Pathology, and Microbiology) to determine whether students were developing subject-area competence and whether abstract-based testing was a viable instructional strategy. Assessment should cover all levels of Bloom's hierarchy, which can be accomplished via multiple-choice questions (2). We hypothesized that by comparing the mean scores of pre- and posttest exams designed to address specific tiers of Bloom's taxonomy, we could evaluate the effectiveness of a course in preparing students to demonstrate subject-area competence. We also sought to develop general guidelines for preparing such tests, along with methods to identify test- and course-specific problems.
ISSN: 1935-7877, 1935-7885
DOI: 10.1128/jmbe.v14i2.633