Development and Validation of a Trauma-Informed Care Communication Skills Assessment Tool
Published in: Academic Pediatrics, 2024-11, Vol. 24(8), pp. 1333-1342
Main Authors:
Format: Article
Language: English
Summary: Trauma-informed care (TIC) is a growing focus in medical education as health care systems recognize trauma's impact on health outcomes. TIC acknowledges and responds to the effects of trauma on physical, psychological, and emotional health. As TIC trainings are developed and delivered to health care professionals across the learner continuum, curricula need evaluation beyond learner satisfaction and knowledge to better assess changes in skills. We developed the Gap Kalamazoo Communication Skills Assessment Form for Trauma-Informed Care (GKCSAF-TIC) to evaluate pediatric trainees' communication skills in TIC. We describe the development and validity evidence of the GKCSAF-TIC in assessing pediatric residents' TIC skills during standardized patient encounters.
We developed and implemented the TIC communication skills assessment tool in a one-year prospective cohort study involving pediatric residents. Simulated patient encounters were conducted before and after TIC training, with two pediatric faculty attendings assessing each encounter. We gathered validity evidence using Messick's framework, focusing on content, response process, internal structure, and relationship with other variables.
We analyzed 57 standardized patient encounters with 33 pediatric interns, including 23 pre-post matched pairs. The development process and rater training supported content and response process validity. Internal consistency, measured by Cronbach's alpha, ranged from 0.93 to 0.96, while inter-rater reliability, measured by intraclass correlation coefficients, ranged from 0.80 to 0.83. There was a significant improvement in scores from pre-training to post-training (3.7/5 to 4.05/5; P …).
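The record does not include the authors' analysis code. As a rough illustration of the reliability and pre/post statistics named in the summary, the Python sketch below computes Cronbach's alpha, a two-way ICC(2,1), and a paired t-test on simulated rating data; the item count, rater count, score distributions, and the specific ICC model are assumptions made for illustration, not details taken from the article.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (encounters x rubric items) score matrix."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (encounters x raters) matrix of overall scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # encounters
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_error = ((ratings - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

rng = np.random.default_rng(0)

# Hypothetical data: 23 matched interns, a 9-item rubric scored 1-5, two raters.
baseline = rng.normal(3.7, 0.4, size=(23, 1))
pre_items = np.clip(baseline + rng.normal(0.0, 0.3, size=(23, 9)), 1, 5)
post_items = np.clip(pre_items + rng.normal(0.35, 0.3, size=(23, 9)), 1, 5)
two_raters = np.clip(baseline + rng.normal(0.0, 0.2, size=(23, 2)), 1, 5)

print("Cronbach's alpha (pre-training):", round(cronbach_alpha(pre_items), 2))
print("ICC(2,1), two raters:", round(icc2_1(two_raters), 2))

# Paired comparison of mean overall scores before vs. after TIC training.
res = stats.ttest_rel(post_items.mean(axis=1), pre_items.mean(axis=1))
print(f"paired t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```

In practice a dedicated statistics package (e.g., pingouin) is often used for alpha and ICC estimates; the manual formulas above simply make the computations reported in the summary explicit.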
ISSN: 1876-2859, 1876-2867
DOI: 10.1016/j.acap.2024.07.008