When decision heuristics and science collide
Published in: Psychonomic Bulletin & Review, 2014-04, Vol. 21(2), pp. 268-282
Main Authors:
Format: Article
Language: English
Summary: The ongoing discussion among scientists about null-hypothesis significance testing and Bayesian data analysis has led to speculation about the practices and consequences of “researcher degrees of freedom.” This article advances this debate by asking the broader questions that we, as scientists, should be asking: How do scientists make decisions in the course of doing research, and what is the impact of these decisions on scientific conclusions? We asked practicing scientists to collect data in a simulated research environment, and our findings show that some scientists use data collection heuristics that deviate from prescribed methodology. Monte Carlo simulations show that data collection heuristics based on *p* values lead to biases in estimated effect sizes and Bayes factors and to increases in both false-positive and false-negative rates, depending on the specific heuristic. We also show that using Bayesian data collection methods does not eliminate these biases. Thus, our study highlights the little appreciated fact that the process of *doing* science is a behavioral endeavor that can bias statistical description and inference in a manner that transcends adherence to any particular statistical framework.
ISSN: 1069-9384, 1531-5320
DOI: 10.3758/s13423-013-0495-z
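
The summary above refers to Monte Carlo simulations in which *p*-value-based data collection heuristics bias error rates. The sketch below is not the authors' simulation code; it is a minimal illustration, assuming a two-group t-test, a "collect a few more observations whenever *p* ≥ .05" stopping heuristic, a batch size of 5, and a cap of 100 observations per group, of how such a heuristic inflates the false-positive rate relative to a fixed-sample design.

```python
# Illustrative sketch (not the authors' code): a Monte Carlo demo of how an
# optional-stopping heuristic driven by p values raises the false-positive rate
# above the nominal alpha of a fixed-N design. Batch size, alpha, maximum N, and
# the number of simulated experiments are all assumed values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_experiment(optional_stopping, true_effect=0.0, start_n=10, batch=5,
                   max_n=100, alpha=0.05):
    """Simulate one two-group experiment; return True if it ends 'significant'."""
    n = start_n
    a = rng.normal(true_effect, 1.0, n)   # treatment group
    b = rng.normal(0.0, 1.0, n)           # control group
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if not optional_stopping or p < alpha or n >= max_n:
            return p < alpha
        # Heuristic: the result is "almost there", so collect a few more observations.
        a = np.concatenate([a, rng.normal(true_effect, 1.0, batch)])
        b = np.concatenate([b, rng.normal(0.0, 1.0, batch)])
        n += batch

n_sims = 5000  # simulated experiments per condition, all under the null hypothesis
fixed = np.mean([one_experiment(False) for _ in range(n_sims)])
optional = np.mean([one_experiment(True) for _ in range(n_sims)])
print(f"False-positive rate, fixed N:           {fixed:.3f}")    # close to .05
print(f"False-positive rate, optional stopping: {optional:.3f}") # noticeably higher
```

The inflation arises because each interim test gives the experiment another chance to cross the significance threshold by chance, which is one instance of the data collection heuristics the article examines.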