
The effect of voice recognition software on comparative error rates in radiology reports

This study sought to confirm whether reports generated in a department of radiology contain more errors if generated using voice recognition (VR) software than if traditional dictation-transcription (DT) is used. All radiology reports generated over a 1-week period in a British teaching hospital were assessed. The presence of errors and their impact on the report were assessed. Data collected included the type of report, site of dictation, the experience of the operator, and whether English was the first language of the operator. 1887 reports were reviewed. 1160 (61.5%) were dictated using VR and 727 reports (38.5%) were generated by DT. 71 errors (3.8% of all reports) were identified. 56 errors were made using VR (4.8% of VR reports), whereas 15 errors were identified in DT reports (2.1% of transcribed reports). The difference in report errors between these two dictation methods was statistically significant (p = 0.002). Of the 71 reports containing errors, 37 (52.1%) had errors that affected understanding. Other factors were also identified that significantly increased the likelihood of errors in a VR-generated report, such as working in a busy inpatient environment (p < 0.001) and having a language other than English as a first language (p = 0.034). Operator grade was not significantly associated with increased errors. In conclusion, using VR significantly increases the number of reports containing errors. Errors using VR are significantly more likely to occur in noisy areas with a high workload and are more likely to be made by radiologists for whom English is not their first language.
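As a quick check on the headline comparison, the sketch below (not part of the original record) reproduces the VR-versus-DT error-rate comparison from the counts quoted in the abstract (56 errors in 1160 VR reports vs. 15 errors in 727 DT reports) using a chi-squared test on the 2x2 table. The paper does not state which test it used, and the scipy dependency is an assumption.

from scipy.stats import chi2_contingency

# 2x2 table built from the abstract's counts: rows are dictation methods,
# columns are [reports with errors, reports without errors].
table = [
    [56, 1160 - 56],  # voice recognition (VR)
    [15, 727 - 15],   # dictation-transcription (DT)
]

# correction=False (no Yates continuity correction) gives a p-value close to
# the reported p = 0.002; with the default correction it is roughly 0.003.
chi2, p, dof, _expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")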

Bibliographic Details
Published in: British journal of radiology 2008-10, Vol.81 (970), p.767-770
Main Authors: MCGURK, S, BRAUER, K, MACFARLANE, T. V, DUNCAN, K. A
Format: Article
Language: English
Subjects: Biological and medical sciences; Clinical Competence - standards; Humans; Investigative techniques, diagnostic techniques (general aspects); Language; Medical Records Systems, Computerized - standards; Medical sciences; Noise, Occupational - adverse effects; Programming Languages; Radiology Department, Hospital; Radiology Information Systems - standards; Speech Recognition Software - standards
Publisher: British Institute of Radiology, London
DOI: 10.1259/bjr/20698753
ISSN: 0007-1285
EISSN: 1748-880X
PMID: 18628322
CODEN: BJRAAP
Source: Oxford Journals Online