
Setting Doesn't Matter Much: A Meta-Analytic Comparison of the Results of Intelligence Tests Obtained in Group and Individual Settings

Bibliographic Details
Published in: European Journal of Psychological Assessment: Official Organ of the European Association of Psychological Assessment, 2019-05, Vol. 35(3), pp. 309-316
Main Authors: Becker, Nicolas; Koch, Marco; Schult, Johannes; Spinath, Frank M.
Format: Article
Language: English
Description
Summary: This study examines the effects of the diagnostic setting on performance on intelligence tests. We conducted a meta-analysis in which k = 30 samples with a total sample size of N = 2,448 were integrated. We compared results for the same intelligence tests administered either in a group or in an individual setting. The main analysis indicated a small mean population effect [M(g) = 0.085] that was not significant [−0.036 ≤ M(g) ≤ 0.206]. Nevertheless, moderator analyses indicated a stronger [M(g) = 0.193] and significant [0.087 ≤ M(g) ≤ 0.298] effect in favor of individual settings for studies employing a between-person design. Setting effects in within-person designs were most likely superimposed by retest effects. As the setting effect was very small, the current testing practice, in which results obtained in group and individual settings are treated as interchangeable, is not overly problematic. However, our results encourage test developers to examine setting effects before stating that results obtained in different settings are equivalent. Between-person designs using participants of comparable ability are most suitable in this context, as retest effects can then be ruled out.
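
The abstract reports a pooled mean effect M(g) with a 95% confidence interval, which is characteristic of a random-effects meta-analysis of standardized mean differences such as Hedges' g. The sketch below illustrates that general technique (DerSimonian-Laird pooling) and is not the authors' actual analysis; the effect sizes and sampling variances are hypothetical placeholders.

    # Illustrative sketch only: pooling Hedges' g values with a
    # DerSimonian-Laird random-effects model to obtain a mean effect
    # M(g) and a 95% confidence interval. Input values are hypothetical.
    import math

    g = [0.05, 0.12, 0.20, -0.03, 0.10]   # hypothetical per-sample effect sizes
    v = [0.04, 0.05, 0.03, 0.06, 0.04]    # hypothetical sampling variances

    # Fixed-effect weights, pooled estimate, and heterogeneity statistic Q
    w = [1.0 / vi for vi in v]
    g_fixed = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    q = sum(wi * (gi - g_fixed) ** 2 for wi, gi in zip(w, g))

    # DerSimonian-Laird estimate of the between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)

    # Random-effects weights, pooled mean effect, and 95% CI
    w_re = [1.0 / (vi + tau2) for vi in v]
    m_g = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    lo, hi = m_g - 1.96 * se, m_g + 1.96 * se

    print(f"M(g) = {m_g:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

An interval that includes zero, as in the main analysis reported above, indicates a non-significant pooled effect; moderator analyses repeat the same pooling within subsets of studies (for example, between-person versus within-person designs).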
ISSN: 1015-5759
eISSN: 2151-2426
DOI: 10.1027/1015-5759/a000402