Researchers have analysed hundreds of neuroscience studies to determine their "statistical power"

16.04.2013 19:53

NEUROSCIENCE is rarely out of the news. Just last week the journal Science carried a paper about using brain-scanning technology to decode the contents of dreams. Cash follows the buzz, too. On April 2nd President Barack Obama announced $100m of funding to kick-start a grand project to map the activity of every neuron in a human brain. Meanwhile, the European Union is throwing a billion euros at the Human Brain Project, which hopes to simulate those neurons on a supercomputer (the similarity of its name to that of the famous, and famously expensive, Human Genome Project is entirely non-coincidental).

Yet, when perusing individual studies, sceptical readers are often left a little uncomfortable: sample sizes in neurological research often seem too small to draw general conclusions. The Science paper, for instance, studied hundreds of dreams, but they came from just three individuals. Now a group of researchers have transformed those niggling doubts into a piece of solid statistical analysis. In a paper published in Nature Reviews Neuroscience, Marcus Munafo, from the University of Bristol, and his colleagues analysed hundreds of neuroscience studies to determine their "statistical power".

Statistical power is a measure of how likely a study is to discover an effect--whether a given drug affects the brain, say, or whether exposure to violent video games makes players more aggressive. Low statistical power equals a high chance of overlooking an effect that is real. This is known as a type II error, or a false negative. Confusingly, low power also makes it more likely that a result which appears statistically significant is in fact a false positive (a type I error), down to chance rather than reflecting any real underlying effect: when few genuine effects are detected, chance findings make up a larger share of the results that clear the significance bar.

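That relationship can be made concrete with the positive predictive value of a significant finding: the probability that it reflects a real effect. The short Python sketch below uses the standard formula; the 5% significance threshold and the assumed pre-study odds of one-in-five are illustrative choices, not figures from the paper.

    # Sketch of how statistical power affects the chance that a "significant"
    # result is real (its positive predictive value, PPV). The significance
    # threshold (5%) and the pre-study odds (one true hypothesis per four
    # false ones) are illustrative assumptions, not figures from the paper.
    def positive_predictive_value(power, alpha=0.05, prior_odds=0.25):
        # Standard formula: PPV = power*R / (power*R + alpha)
        return power * prior_odds / (power * prior_odds + alpha)

    for power in (0.80, 0.21, 0.08):
        print(f"power {power:.0%}: PPV of a significant result ~ "
              f"{positive_predictive_value(power):.0%}")

Under those assumptions, a significant result from an 80%-powered study is real about 80% of the time; at 21% power only around half of significant results are, and at 8% power fewer than a third.
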
Dr Munafo and his team looked at 49 neuroscientific meta-analyses published in 2011. (A meta-analysis is a study that combines the results of lots of other studies; in this case, the 49 meta-studies included results from 731 individual papers.) Their results were striking. The typical study had a power of just 21%. In other words, it had a 79% chance of failing to spot a real effect. In some subfields, things were even worse. Neuroimaging studies which used MRI scanners to measure brain volume had a power of just 8%. That average of 21% disguised a skewed distribution, too: most of the studies scored between 0% and 20%, although there was a modest peak in the 91-100% range.

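For a sense of what 21% power means in practice, the sketch below simulates many small two-group experiments in which a real effect genuinely exists and counts how often a standard test detects it. The effect size, group size and number of runs are illustrative assumptions, not parameters taken from the meta-analyses.

    # Sketch: simulate many small two-group experiments in which a real effect
    # exists, and count how often a t-test reaches p < 0.05. The effect size
    # (0.5 standard deviations), group size (10) and number of runs are
    # illustrative assumptions, not parameters from the meta-analyses.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    effect_size, n_per_group, n_runs = 0.5, 10, 10_000

    hits = 0
    for _ in range(n_runs):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        _, p_value = ttest_ind(treated, control)
        if p_value < 0.05:
            hits += 1

    print(f"Real effect detected in {hits / n_runs:.0%} of simulated studies")

Under these assumed numbers the test catches the real effect only about a fifth of the time, in the same ballpark as the 21% typical power the paper reports; the remaining runs are false negatives.
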
If the researchers' figures are accurate--and if the 12-month period they looked at is representative of neuroscience research in general--then the implications are alarming. Bluntly, much of the published neuroscientific research is likely to be reporting effects, correlations and "facts" that are simply not real. At the same time, real phenomena are going unnoticed.

Such worries are not unique to neuroscience, of course. One of the study's authors, John Ioannidis, made his name in 2005 with a paper bluntly called "Why Most Published Research Findings Are False", which examined similar worries around medical research.

Why does it happen? Structural incentives in science make things worse. As the researchers point out, small, underpowered studies are cheap, and good--if you do enough of them--at generating the sorts of novel, statistically significant and seemingly clean results that get published in prestigious journals. That represents a powerful temptation for scientists, who are judged on their publication records, and funding outfits, which want to get the most bang for their limited bucks. But it risks polluting the