Psychology research is often questionable, study finds
Kerry Sheridan
MIAMI, United States - Scientific studies about how people act or think can rarely be replicated by outside experts, according to a study released Thursday that raised new questions about the rigor of psychology research.
A team of 270 scientists tried reproducing 100 psychology and social science studies that had been published in three top peer-reviewed US journals in 2008.
Just 39 percent came out with the same results as the initial reports, said the findings in the journal Science.
The study topics ranged from people's social lives and interactions with others to research involving perception, attention and memory.
No medical therapies were called into question as a result of the study, although a separate effort is underway to evaluate cancer biology studies.
"It's important to note that this somewhat disappointing outcome does not speak directly to the validity or the falsity of the theories," said Gilbert Chin, a psychologist and senior editor at the journal Science.
"What it does say is that we should be less confident about many of the original experimental results."
Study co-author Brian Nosek from the University of Virginia said the research shows the need for scientists to continually question themselves.
"A scientific claim doesn't become believable because of the status or authority of the person that generated it," Nosek told reporters.
"Credibility of the claim depends in part on the repeatability of its supporting evidence," he told reporters.
Problems can arise when scientists cherry-pick their data to include only what is deemed "significant," or when sample sizes are so small that false positives or false negatives arise.
Nosek said scientists are also under pressure to publish their research regularly and in top journals, and the process can lead to a skewed picture.
"Not everything we do gets published. Novel, positive and tidy results are more likely to survive peer review and this can lead to publication biases that leave out negative results and studies that do not fit the story that we have," he said.
"If this occurs on a broad scale, then the published literature may become more beautiful than the reality."
Some experts said the problem may be even worse than the current study suggested.
John Ioannidis, an epidemiologist at Stanford University in Palo Alto, California, told Science magazine that he suspects only about 25 percent of psychology papers would hold up under scrutiny, roughly the same "as what we see in many biomedical disciplines."
- Key caution -
One study author who participated in the project as both a reviewer and reviewee was E.J. Masicampo, an assistant professor at Wake Forest University in North Carolina.
He was part of a team that was able to replicate a study which found that people faced with a confrontational task, such as having to play a violent video game, prefer to listen to angry music and think about negative experiences beforehand.
But when outside researchers tried to replicate Masicampo's own study -- which hypothesized that a sugary drink can help college students do better at making a complicated decision -- they were not successful.
Masicampo expressed no bitterness, chalking the differences up to geographical factors and stressing that the exercise showed how complicated it can be to carry out a high-quality replication of a study.
"As an original author whose work was being replicated, I felt like my research was being treated in the best way possible," she said.
There are ways to fix the process so that findings are more likely to hold up under scrutiny, according to Dorothy Bishop, professor of developmental neuropsychology at the University of Oxford.
"I see this study as illustrating that we have a problem, one that could be tackled," said Bishop, who was not involved in research.
She urged mandatory pre-registration of research methods to prevent scientists from picking only the most favorable data for analysis, as well as requiring adequate sample sizes and wider reporting of studies with null results, in other words those that do not support the hypothesis initially put forward.
Scientists could also publish their methods and data in detail so that others could try to replicate their experiments more easily.
These are "simply ways of ensuring that we are doing science as well as we can," Bishop said.