Risk of bias assessment of test comparisons was uncommon in comparative accuracy systematic reviews: an overview of reviews
Comparative diagnostic test accuracy systematic reviews (DTA reviews) assess the accuracy of two or more tests and compare their diagnostic performance. We investigated how comparative DTA reviews assessed the risk of bias (RoB) in primary studies that compared multiple index tests.
Published in: | Journal of clinical epidemiology 2020-11, Vol.127, p.167-174 |
---|---|
Main Authors: | Yang, Bada; Vali, Yasaman; Dehmoobad Sharifabadi, Anahita; Harris, Isobel Marion; Beese, Sophie; Davenport, Clare; Hyde, Christopher; Takwoingi, Yemisi; Whiting, Penny; Langendam, Miranda W.; Leeflang, Mariska M.G. |
Format: | Article |
Language: | English |
container_end_page | 174 |
container_issue | |
container_start_page | 167 |
container_title | Journal of clinical epidemiology |
container_volume | 127 |
creator | Yang, Bada; Vali, Yasaman; Dehmoobad Sharifabadi, Anahita; Harris, Isobel Marion; Beese, Sophie; Davenport, Clare; Hyde, Christopher; Takwoingi, Yemisi; Whiting, Penny; Langendam, Miranda W.; Leeflang, Mariska M.G. |
description | Comparative diagnostic test accuracy systematic reviews (DTA reviews) assess the accuracy of two or more tests and compare their diagnostic performance. We investigated how comparative DTA reviews assessed the risk of bias (RoB) in primary studies that compared multiple index tests.
This is an overview of comparative DTA reviews indexed in MEDLINE from January 1st to December 31st, 2017. Two assessors independently identified DTA reviews including at least two index tests and containing at least one statement in which the accuracy of the index tests was compared. Two assessors independently extracted data on the methods used to assess RoB in studies that directly compared the accuracy of multiple index tests.
We included 238 comparative DTA reviews. Only two reviews (0.8%, 95% confidence interval 0.1 to 3.0%) conducted RoB assessment of test comparisons undertaken in primary studies; neither used an RoB tool specifically designed to assess bias in test comparisons.
Assessment of RoB in test comparisons undertaken in primary studies was uncommon in comparative DTA reviews, possibly due to lack of existing guidance on and awareness of potential sources of bias. Based on our findings, guidance on how to assess and incorporate RoB in comparative DTA reviews is needed. |
doi_str_mv | 10.1016/j.jclinepi.2020.08.007 |
format | article |
publisher | Elsevier Inc (United States) |
pmid | 32798714 |
fulltext | fulltext |
identifier | ISSN: 0895-4356 |
ispartof | Journal of clinical epidemiology, 2020-11, Vol.127, p.167-174 |
issn | 0895-4356 (print); 1878-5921 (electronic) |
language | eng |
source | ScienceDirect Freedom Collection 2022-2024 |
subjects | Accuracy; Bias; Confidence Intervals; Data Accuracy; Diagnostic accuracy; Diagnostic systems; Diagnostic tests; Diagnostic Tests, Routine - standards; Epidemiology; Humans; Medical diagnosis; Meta-analysis; Patients; Reviews; Risk assessment; Systematic review; Systematic Reviews as Topic; Test comparison |
title | Risk of bias assessment of test comparisons was uncommon in comparative accuracy systematic reviews: an overview of reviews |
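The abstract reports that 2 of 238 reviews (0.8%) assessed RoB of test comparisons, with a 95% confidence interval of 0.1 to 3.0%. That interval is consistent with an exact (Clopper–Pearson) binomial interval; the paper does not say which method the authors used, but a minimal stdlib-only Python sketch of the exact interval (function names are illustrative, not from the paper) reproduces the reported bounds:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided CI for a binomial proportion x/n, found by
    bisecting the binomial CDF (no third-party packages needed)."""
    def root(f) -> float:
        # Generic bisection on [0, 1] for a monotone, sign-changing f.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    # Lower bound: the p at which P(X >= x) = alpha/2.
    # Upper bound: the p at which P(X <= x) = alpha/2.
    lower = 0.0 if x == 0 else root(lambda p: (1.0 - binom_cdf(x - 1, n, p)) - alpha / 2.0)
    upper = 1.0 if x == n else root(lambda p: binom_cdf(x, n, p) - alpha / 2.0)
    return lower, upper

lo, hi = clopper_pearson(2, 238)
print(f"2/238 = {2/238:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
# → 2/238 = 0.8%, 95% CI 0.1% to 3.0%
```

The bisection avoids needing a beta-distribution quantile function; with scipy available, `scipy.stats.beta.ppf` would give the same bounds directly.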