Cross-cultural validation of the Spanish version of the Mini Cambridge-Exeter Repetitive Thought Scale (Mini-CERTS) in two Spanish-speaking populations

Bibliographic Details
Published in: Transcultural Psychiatry 2024-04, Vol. 61 (2), p. 142-150
Main Authors: Ros, Laura, Barry, Tom J., López-Honrubia, Rigoberto, Villanueva-Benite, Maritza E., Morcillo, Alberto, Ricarte, Jorge J.
Format: Article
Language:English
Description
Summary: The Mini Cambridge-Exeter Repetitive Thoughts Scale (Mini-CERTS) captures constructive and unconstructive aspects of repetitive thinking, but, given its novelty, it still needs revision and improvement. For this reason, we present a validation and factor analysis of the Spanish version of the Mini-CERTS. Because cultural issues are important to consider in instrument adaptation, we also assess its measurement invariance across Spanish (N = 430) and Peruvian (N = 394) populations. After deleting conflicting items, a 9-item version of the Mini-CERTS showed a two-factor model distinguishing constructive from unconstructive repetitive thinking, although this solution was not invariant across groups. Results also showed that the unconstructive factor was positively associated with measures of anxiety, depression, and stress. Despite the scale's acceptable internal consistency, the absence of measurement invariance means it cannot be recommended for cross-group comparisons in these populations. Cultural issues that could explain this result are discussed. Our findings highlight the importance of performing cross-cultural adaptations of assessment instruments even within the same language.
ISSN: 1363-4615; 1461-7471
DOI: 10.1177/13634615231209143