Evaluating Psychotherapist Competence: Testing the Generalizability of Clinical Competence Assessments of Graduate Trainees
Published in: Journal of Counseling Psychology, 2022-03, Vol. 69(2), pp. 222-234
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Health service psychology (HSP) graduate programs are shifting from knowledge- to competency-based assessments of trainees' psychotherapy skills. This study used Generalizability Theory to test the dependability of psychotherapy competence assessments based on video observation of trainees. A 10-item rating form was developed from a collection of forms used by graduate programs (n = 102) in counseling and clinical psychology and from a review of the common factors research literature. This form was then used by 11 licensed psychologists to rate eight graduate trainees while viewing 129 video clips, each approximately 5 min long, from the trainees' psychotherapy sessions with clients (n = 22) at a graduate program's training clinic. Generalizability analyses were used to forecast how the number of raters, the number of clients, and the length of observation time affect the dependability of ratings under various rating designs. Raters were the primary source of error variance, with rater main effects (leniency bias) and dyadic effects (rater-target interactions) contributing 24% and 7% of variance, respectively. Variance due to segments (video clips) was also substantial, suggesting that therapist performance varies within a single counseling session. Generalizability coefficients (G) were highest for crossed rating designs and reached maximum levels (G > .50) after four raters watched each therapist working with three clients and observed 15 min per dyad. These findings suggest that expert raters show consensus in ratings even without rater training and with only limited direct observation. Future research should investigate the validity of competence ratings as predictors of outcome.
Public Significance Statement
Ratings of clinical competence are used to determine whether trainees in HSP are making adequate progress and to document competence for accreditation and licensure bodies. This study examined sources of error in these ratings to provide guidance on improving assessment procedures. For competence assessments based on direct observation, we recommend evaluation by multiple raters for each trainee and observation times of at least 60 min per trainee.
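To make the decision-study logic concrete: a generalizability coefficient compares the variance attributable to the object of measurement (therapists) against error variance that shrinks as ratings are averaged over more raters, clients, and observation segments. The sketch below projects a relative G coefficient for a design in which raters are crossed with therapists and segments are nested within clients; the function name, the nesting structure, and the variance-component values are illustrative assumptions, not the article's estimates.

```python
# Minimal sketch of a Generalizability Theory decision study,
# assuming raters are crossed with therapists and segments are
# nested within clients. All variance components below are
# hypothetical placeholders, not the article's estimates.

def projected_g(var_therapist, var_rater_x_therapist,
                var_client_x_therapist, var_segment_x_therapist,
                var_residual, n_raters, n_clients, n_segments):
    """Relative generalizability coefficient for mean ratings.

    Each error term is divided by the number of conditions averaged
    over, so G rises as raters, clients, or segments are added.
    """
    error = (var_rater_x_therapist / n_raters
             + var_client_x_therapist / n_clients
             + var_segment_x_therapist / (n_clients * n_segments)
             + var_residual / (n_raters * n_clients * n_segments))
    return var_therapist / (var_therapist + error)

# Hypothetical components expressed as proportions of total variance:
g = projected_g(var_therapist=0.10,
                var_rater_x_therapist=0.07,   # dyadic rater-target effects
                var_client_x_therapist=0.05,
                var_segment_x_therapist=0.10,
                var_residual=0.30,
                n_raters=4, n_clients=3, n_segments=3)
print(f"Projected G for 4 raters x 3 clients x 3 segments: {g:.2f}")
```

Note that this is a relative coefficient, so rater main effects (leniency bias) drop out of the error term; an absolute coefficient would add them back in, divided by the number of raters, which makes averaging over multiple raters even more valuable when leniency differences are large.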
ISSN: 0022-0167, 1939-2168
DOI: 10.1037/cou0000576