Painting a more accurate picture of scientific findings

- November 13, 2015
- Cognitive scientist Vandekerckhove receives NSF grant to develop a tool that will make the research process more accurate
The criteria used to decide which academic studies get published and thus reach the
masses have become a concern in the field of scientific psychology and more broadly
in academic research, says Joachim Vandekerckhove, UCI cognitive sciences assistant
professor.
“There’s an alarming overrepresentation of ‘positive findings’ in the literature,”
he says. “Academic papers are far more likely to be published if the results are statistically
significant – a concept in statistics that’s often misunderstood.”
Vandekerckhove, who has a courtesy appointment in the Department of Statistics at UCI, explains that the statistical significance testing procedure is quite controversial and imposes a “devastating threshold for experimental results to see the light of publication.”
Called “publication bias,” the issue means that only studies that show a surprisingly large effect make the cut for publication and thus become the accepted literature and findings on the topic.
“This is very problematic because large effects can easily happen by chance. There could be, say, ten studies that found no effect, but one that did, and that one is more likely to be published,” he says. “The result is that published studies are not representative of the real world that scientists study every day, and that skews the research and data on the topic.”
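A minimal simulation makes the selection effect concrete. The sketch below is an illustration of the general point, not Vandekerckhove's model; the sample sizes, the small true effect, and the p < .05 publication rule are assumptions chosen for the example, not figures from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n_studies = 10_000   # hypothetical replications of the same experiment
n_per_group = 30     # participants per group in each study
true_effect = 0.2    # small true standardized effect (Cohen's d)

observed_d = np.empty(n_studies)
significant = np.empty(n_studies, dtype=bool)
for i in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treat, control)
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    observed_d[i] = (treat.mean() - control.mean()) / pooled_sd
    significant[i] = p < 0.05   # the publication filter

print(f"true effect:                    {true_effect:+.3f}")
print(f"mean effect, all studies:       {observed_d.mean():+.3f}")
print(f"mean effect, 'published' only:  {observed_d[significant].mean():+.3f}")
print(f"fraction reaching publication:  {significant.mean():.3f}")
```

With these numbers, only roughly one study in ten clears the threshold, and the average “published” effect comes out several times larger than the true one, which is exactly the skew the quote describes.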
After a recent attempt to reproduce the findings from several major journals in psychology, a large international team concluded that fewer than 40% of those results were robust. Another team tried to replicate a phenomenon that had been reported in more than 40 experiments in 15 papers, but was unable to do so despite testing 1,600 participants.
Vandekerckhove attributes this lack of reproducibility partly to the embattled null hypothesis significance testing procedure itself, which sets a low bar for scientific evidence, but mostly to publication bias causing flukish results to be published. “It's nearly impossible to know how many studies did not obtain a large enough effect to be published, and were relegated to academic file drawers.”
A mathematical psychologist who specializes in Bayesian statistics, Vandekerckhove has received a $260,000 grant from the National Science Foundation to develop a meta-analytic tool for more accurately measuring and undoing such bias in the publication process. The tool will be based on a Bayesian model averaging technique he calls “Bayesian bias correction.”
Its impact will be most apparent in psychology, while also extending to the social, health, and medical sciences, where his approach can be used to identify the effect of publication bias, confirm or falsify null hypotheses, and more accurately estimate effect size. He plans to make his tool available through publications, conferences, workshops, teaching, tutorials, and, most importantly, software available freely online.
“By making the method accessible, this tool will enable all researchers to add statistical
mitigation of publication bias to their meta-analytic toolkit, thereby making their
work more accurate and reflective of current findings,” he says.
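The article doesn't spell out the model, but the general flavor of Bayesian model averaging for bias correction can be sketched. In the toy example below, the study estimates, the N(0, 1) prior, and the all-or-none “only significant results get published” selection rule are all simplifying assumptions for illustration, not details of Vandekerckhove's method. Two accounts of the same published estimates are compared, one that ignores selection and one that models it, and the final effect estimate is a posterior-probability-weighted average of the two.

```python
import numpy as np
from scipy import stats

# Hypothetical published effect estimates and standard errors
# (illustrative numbers only -- every estimate clears |y| > 1.96*se)
y = np.array([0.45, 0.60, 0.52, 0.70, 0.48])
se = np.array([0.20, 0.25, 0.22, 0.30, 0.21])

grid = np.linspace(-2.0, 2.0, 2001)       # grid over the true effect delta
dx = grid[1] - grid[0]
prior = stats.norm.pdf(grid, 0.0, 1.0)    # same N(0, 1) prior under both models

def loglik_no_bias(delta):
    # Model 1: every study is published regardless of outcome
    return stats.norm.logpdf(y[:, None], delta, se[:, None]).sum(axis=0)

def loglik_bias(delta):
    # Model 2: only studies significant at p < .05 get published, so each
    # study's density is truncated to the significant region |y| > 1.96*se
    logdens = stats.norm.logpdf(y[:, None], delta, se[:, None])
    crit = 1.96 * se[:, None]
    p_sig = (stats.norm.sf(crit, delta, se[:, None])
             + stats.norm.cdf(-crit, delta, se[:, None]))
    return (logdens - np.log(p_sig)).sum(axis=0)

like0 = np.exp(loglik_no_bias(grid))
like1 = np.exp(loglik_bias(grid))
m0 = np.sum(like0 * prior) * dx           # marginal likelihood, no-bias model
m1 = np.sum(like1 * prior) * dx           # marginal likelihood, bias model

p0, p1 = m0 / (m0 + m1), m1 / (m0 + m1)   # posterior model probabilities
mean0 = np.sum(grid * like0 * prior) * dx / m0
mean1 = np.sum(grid * like1 * prior) * dx / m1

print(f"P(no-bias model | data) = {p0:.3f}")
print(f"P(bias model | data)    = {p1:.3f}")
print(f"naive precision-weighted estimate: {np.average(y, weights=1/se**2):+.3f}")
print(f"model-averaged estimate:           {p0 * mean0 + p1 * mean1:+.3f}")
```

In this toy setup, published estimates that sit suspiciously close to the significance threshold lend the selection model most of the posterior probability, pulling the averaged estimate below the naive one; when the data are overwhelming, the no-bias model dominates and the correction costs almost nothing.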
Funding for this work began in September and will run through August 2018.