
Risk of bias: a simulation study of power to detect study-level moderator effects in meta-analysis

Bibliographic Details
Published in: Systematic reviews 2013-11, Vol.2 (1), p.107-107, Article 107
Main Authors: Hempel, Susanne, Miles, Jeremy N V, Booth, Marika J, Wang, Zhen, Morton, Sally C, Shekelle, Paul G
Format: Article
Language:English
container_end_page 107
container_issue 1
container_start_page 107
container_title Systematic reviews
container_volume 2
creator Hempel, Susanne
Miles, Jeremy N V
Booth, Marika J
Wang, Zhen
Morton, Sally C
Shekelle, Paul G
description There are both theoretical and empirical reasons to believe that design and execution factors are associated with bias in controlled trials. Statistically significant moderator effects, such as the effect of trial quality on treatment effect sizes, are rarely detected in individual meta-analyses, and evidence from meta-epidemiological datasets is inconsistent. The reasons for the disconnect between theory and empirical observation are unclear. The study objective was to explore the power to detect study-level moderator effects in meta-analyses.

We generated meta-analyses using Monte Carlo simulations and investigated the effect of the number of trials, trial sample size, moderator effect size, heterogeneity, and moderator distribution on the power to detect moderator effects. The simulations provide a reference guide for investigators to estimate power when planning meta-regressions.

The power to detect moderator effects in meta-analyses, for example, effects of study quality on effect sizes, is largely determined by the degree of residual heterogeneity present in the dataset (noise not explained by the moderator). Larger trial sample sizes increase power only when residual heterogeneity is low. A large number of trials or low residual heterogeneity is necessary to detect effects. When the proportions of the moderator levels are unequal (for example, 25% 'high quality' and 75% 'low quality' trials), 80% power was rarely achieved in the investigated scenarios. Application to an empirical meta-epidemiological dataset with substantial heterogeneity (I² = 92%, τ² = 0.285) estimated that more than 200 trials are needed to reach 80% power for a statistically significant result, even for a substantial moderator effect (0.2), and the number of trials with the less common feature (for example, few 'high quality' studies) affects power extensively.

Although study characteristics, such as trial quality, may explain some proportion of the heterogeneity across study results in meta-analyses, residual heterogeneity is a crucial factor in determining when associations between moderator variables and effect sizes can be statistically detected. Detecting moderator effects requires more powerful analyses than are employed in most published investigations; hence, negative findings should not be considered evidence of a lack of effect, and investigations are not hypothesis-proving unless power calculations show sufficient ability to detect effects.
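The simulation design described in the abstract can be sketched as follows. This is an illustrative reconstruction only: the parameter names, the rough SMD sampling-variance approximation (4/n), and the simple weighted-least-squares z-test with known τ² are all assumptions, not the authors' actual code (real meta-regressions would estimate τ², e.g. by REML).

```python
import numpy as np

def moderator_power(k=30, n=100, beta1=0.2, tau2_resid=0.285,
                    prop=0.5, n_sims=500, seed=42):
    """Monte Carlo estimate of the power to detect a binary study-level
    moderator (e.g. 'high quality' vs 'low quality') in a meta-regression.

    k          number of trials in the meta-analysis
    n          participants per trial (equal arms assumed)
    beta1      true moderator effect on the standardized mean difference
    tau2_resid residual between-trial heterogeneity (tau^2 not explained
               by the moderator)
    prop       proportion of trials carrying the moderator feature
    """
    rng = np.random.default_rng(seed)
    v = 4.0 / n          # rough sampling variance of an SMD with n per trial
    crit = 1.959964      # two-sided z critical value at alpha = 0.05
    hits = 0
    for _ in range(n_sims):
        x = (rng.random(k) < prop).astype(float)       # moderator indicator
        theta = beta1 * x + rng.normal(0.0, np.sqrt(tau2_resid), k)
        y = theta + rng.normal(0.0, np.sqrt(v), k)     # observed effect sizes
        # Weighted least squares with variances treated as known
        # (a simplification; real analyses estimate tau^2 first).
        w = 1.0 / (v + tau2_resid)
        X = np.column_stack([np.ones(k), x])
        cov = np.linalg.inv(w * (X.T @ X))             # (X' W X)^-1
        b = cov @ (w * (X.T @ y))                      # [beta0_hat, beta1_hat]
        if abs(b[1]) / np.sqrt(cov[1, 1]) > crit:      # z-test on the moderator
            hits += 1
    return hits / n_sims
```

Under these assumed settings, lowering `tau2_resid` sharply raises the estimated power while increasing `n` with high residual heterogeneity barely helps, mirroring the abstract's central finding that residual heterogeneity, not trial sample size, dominates the power to detect moderator effects.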
doi_str_mv 10.1186/2046-4053-2-107
format article
pmid 24286208
publisher BioMed Central Ltd, England
startdate 2013-11-28
rights Copyright © 2013 Hempel et al.; licensee BioMed Central Ltd.
oa free_for_read
fulltext fulltext
identifier ISSN: 2046-4053
ispartof Systematic reviews, 2013-11, Vol.2 (1), p.107-107, Article 107
issn 2046-4053
2046-4053
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_4219184
source PubMed (Medline)
subjects Bias
Computer Simulation
Effect Modifier, Epidemiologic
Humans
Meta-Analysis as Topic
Methodology
Monte Carlo Method
Randomized Controlled Trials as Topic - standards
Research Design
Risk Factors
title Risk of bias: a simulation study of power to detect study-level moderator effects in meta-analysis