The Case for Blind Analysis

October 9, 2015

Could blind analysis of data — in which an investigator or a computer program obscures data values or labels, or both, and as much of the analysis as possible is done “in the dark,” without reference to expected results — help reduce bias toward certain research findings? Robert MacCoun, a professor of law at Stanford University, and Saul Perlmutter, the Franklin W. and Karen Weber Dabby Chair in physics at the University of California at Berkeley, say yes in a new essay in Nature that’s getting a lot of attention, including on Twitter. The authors note that blind analysis is already commonplace in several physics subfields, but argue that it holds great potential for the biological, psychological and social sciences as well — the latter two of which have especially weathered recent data legitimacy scandals.

“Many motivations distort what inferences we draw from data,” say MacCoun and Perlmutter, who is the 2011 Nobel Prize winner in physics. “These include the desire to support one's theory, to refute one's competitors, to be first to report a phenomenon, or simply to avoid publishing 'odd' results. Such biases can be conscious or unconscious. They can occur irrespective of whether choices are motivated by the search for truth, by the good mentor's desire to help their student write a strong Ph.D. thesis, or just by naked self-interest. …Working blind while selecting data and developing and debugging analyses offers an important way to keep scientists from fooling themselves.” 
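One common form of blind analysis in physics adds a hidden random offset to the measured values, so the analyst can select data and debug the pipeline without knowing how close the result is to the expected answer; the offset is removed only after the analysis procedure is frozen. The sketch below is illustrative only — the function names and numbers are assumptions, not taken from MacCoun and Perlmutter’s essay.

```python
import random
import statistics

def blind(values, seed=0):
    """Return blinded values plus a sealed unblinding function."""
    rng = random.Random(seed)
    offset = rng.uniform(-10.0, 10.0)  # hidden from the analyst
    blinded = [v + offset for v in values]

    def unblind(result):
        # Called only once the analysis procedure is finalized.
        return result - offset

    return blinded, unblind

# Hypothetical measurements whose true mean the analyst must not see
# while developing the analysis.
measurements = [4.9, 5.1, 5.0, 5.2, 4.8]
blinded_data, unblind = blind(measurements, seed=42)

# The analysis is written and debugged "in the dark"...
blinded_mean = statistics.mean(blinded_data)

# ...and the true estimate is revealed only at the very end.
true_mean = unblind(blinded_mean)
```

Because every choice about cuts, outliers and fitting is made before the offset is removed, the analyst cannot steer the pipeline toward a preferred answer — which is precisely the kind of self-deception the essay warns against.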
