Psychology, and in particular its research side, has been in crisis for some years now, which does nothing to help its credibility. The problem lies not only in the difficulty of replicating classic experiments, but also in how new articles come to be published.

The big problem is that there seems to be a prominent publication bias in psychology: articles appear to be published more on the basis of how interesting they might seem to the general public than on the results and scientifically relevant information they offer to the world.

Today we will try to understand how serious the problem is, what it implies, how this conclusion was reached, and whether it is exclusive to the behavioral sciences or whether other sciences are at the same crossroads.

What is publication bias in psychology?

In recent years, several researchers in psychology have warned about the scarcity of replication studies in the field, which raised the possibility of a publication bias in the behavioral sciences. Although suspicions had been building for some time, it was not until the late 2000s and early 2010s that clear evidence emerged that psychological research had problems, problems that could mean the loss of valuable information for the advancement of this great, though precarious, science.

One of the first red flags was what happened with Daryl Bem’s 2011 experiment. The experiment itself was simple:

Volunteers were shown 48 words and then asked to write down as many of them as they could remember. Once this was done, they had a practice session in which they were given a subset of those 48 words and asked to write them out. The hypothesis was that participants would have recalled better precisely those words they were later made to practice; in other words, that the practice would retroactively improve recall.

Following the publication of this paper, three separate research teams attempted to replicate its results. Although they followed essentially the same procedure as the original work, they did not obtain similar results. Those null results, although informative in themselves, were reason enough for the three groups to run into serious trouble getting their work published.

First, because the studies were replications of earlier work, scientific journals showed little interest: they wanted something new and original, not a “mere copy” of previous research. Added to this, since the results of the three new experiments were negative, the studies were read as methodologically flawed work that explained its own poor results, rather than as new data that might represent an advance for science.

In psychology, studies that confirm their hypotheses and thus obtain more or less clear positive results seem to end up behaving like rumors. They spread easily through the community, sometimes without anyone consulting the original source or reflecting carefully on the conclusions and caveats raised by the author himself or by critics of the work.

When attempts to replicate previous studies with positive results fail, the failed replications systematically go unpublished. Even after carrying out an experiment showing that a classic finding could not be reproduced, authors often do not submit it, knowing it is of no interest to the journals, and so it never enters the literature. The result is that what is technically a myth continues to circulate as scientific fact.

On the other hand, there are the ingrained habits of the research community, ways of proceeding that are quite open to criticism but so widespread that many people turn a blind eye: modifying experimental designs in ways that guarantee positive results, deciding on the sample size only after checking whether the results have come out significant, citing only the previous studies that confirm the current hypothesis, and casually omitting or ignoring those that refute it.
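As a minimal illustration of the second of these habits, here is a small Python simulation (our own sketch, not taken from any of the studies discussed here) of “optional stopping”: the researcher keeps testing as the data come in and stops as soon as p < .05. Even though every simulated study below has a true effect of exactly zero, significant results appear far more often than the nominal 5%.

    # Optional stopping: peeking at the data after every few subjects and
    # stopping as soon as p < .05. Every simulated study has a true effect
    # of exactly zero, so any "significant" result is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def optional_stopping_study(max_n=100, start_n=10, step=5, alpha=0.05):
        data = rng.normal(0.0, 1.0, size=max_n)  # null data: no real effect
        for n in range(start_n, max_n + 1, step):
            _, p = stats.ttest_1samp(data[:n], popmean=0.0)
            if p < alpha:
                return True  # stop early and report a "significant" finding
        return False

    n_studies = 2000
    hits = sum(optional_stopping_study() for _ in range(n_studies))
    print(f"False positive rate with optional stopping: {hits / n_studies:.3f}")

With this many peeks, the rate typically lands around 0.15 to 0.20 instead of 0.05; fixing the sample size in advance, or statistically correcting for the repeated looks, brings it back to its nominal level.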

The practices just described can be criticized but, up to a point, understood (though not necessarily tolerated). Beyond them, however, there are cases of outright manipulation of study data to guarantee publication, which can only be described as open fraud and a total lack of scruples and professional ethics.

One of the most notorious cases in the history of psychology is that of Diederik Stapel, whose fraud is considered to be of biblical proportions: for some of his experiments he invented all of the data. To put it plainly, like someone writing a novel, this gentleman made his research up.

This implies not only a lack of scruples and a scientific ethic conspicuous by its absence, but also a total lack of empathy towards the researchers who used his data in subsequent work, rendering those studies more or less fictitious as well.

Studies that have highlighted this bias

In 2014, Kühberger, Fritz and Scherndl analyzed nearly 1,000 randomly selected psychology articles published in 2007. The analysis revealed, overwhelmingly, a clear publication bias in the behavioral sciences.

According to these researchers, effect size and the number of participants in a study should, in theory, be independent of each other. Their analysis, however, revealed a strong negative correlation between the two variables in the selected studies: studies with smaller samples reported larger effect sizes than studies with larger samples. This is exactly the pattern publication bias predicts, since a small study can only reach statistical significance if the effect it observes is large.
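The mechanism is easy to reproduce in a small simulation (our own illustrative sketch, not Kühberger et al.’s procedure): every simulated study below measures the same modest true effect, but only significant results in the expected direction get “published”, and the published record alone shows exactly the negative correlation the authors describe.

    # Publication filter: every study measures the same true effect (d = 0.2),
    # but only studies with p < .05 in the positive direction are "published".
    # Among published studies, sample size and observed effect size end up
    # strongly negatively correlated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.2
    published_n, published_d = [], []

    for _ in range(5000):
        n = int(rng.integers(10, 200))                # sample size varies by study
        sample = rng.normal(true_effect, 1.0, size=n)
        t, p = stats.ttest_1samp(sample, popmean=0.0)
        d = sample.mean() / sample.std(ddof=1)        # observed Cohen's d
        if p < 0.05 and t > 0:                        # the publication filter
            published_n.append(n)
            published_d.append(d)

    r = np.corrcoef(published_n, published_d)[0, 1]
    print(f"Correlation between sample size and published effect: {r:.2f}")

Small studies can only clear the significance threshold when their observed effect happens to be large, so filtering by significance manufactures the negative correlation even though the underlying effect is identical everywhere.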

The same analysis also showed that published studies with positive results outnumbered those with negative results by a ratio of roughly 3:1. This indicates that it is the statistical significance of the results, rather than their actual value to science, that determines whether a study gets published.

But apparently it is not only psychology that suffers from this bias towards positive results. In fact, it could be called a generalized phenomenon across the sciences, although psychology and psychiatry are the most likely to report positive results and set aside studies with negative or moderate ones. This was observed in a review by the sociologist Daniele Fanelli of the University of Edinburgh, who examined nearly 4,600 studies and found that between 1990 and 2007 the proportion of positive results grew by more than 22%.

Is a replication that bad?

There is a mistaken belief that a negative replication invalidates the original result. That a study repeats the same experimental procedure and obtains different results does not necessarily mean the new study is methodologically flawed, nor that the original results were exaggerated. Many reasons and factors can lead to different outcomes, and each of them improves our knowledge of reality, which in the end is the goal of any science.

New replications should not be seen as harsh criticism of the original work, nor as a simple “copy and paste” of an original study with a different sample. It is thanks to replications that we gain a deeper understanding of a previously investigated phenomenon, and that we can identify conditions under which the phenomenon does not replicate or does not occur in the same way. When the factors that condition whether the phenomenon appears are understood, better theories can be built.

Preventing publication bias

Resolving the situation in which psychology, and science in general, finds itself is difficult, but that does not mean the bias must get worse or become chronic. Sharing all useful data with the scientific community will require effort from every researcher, as well as greater tolerance from journals towards studies with negative results, and some authors have proposed a series of measures that could help end the situation:

  • Elimination of null-hypothesis significance testing (a sketch of an estimation-based alternative follows this list).
  • A more positive attitude towards non-significant results.
  • Improved peer review and publication practices.
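As a small illustration of the first two measures (a sketch under our own assumptions, not a procedure taken from the authors cited above), reporting an effect size together with a confidence interval conveys useful information whether or not a result crosses the significance threshold:

    # Estimation-based reporting: an effect size plus a confidence interval
    # is informative even when a result is not "significant". The two groups
    # here are hypothetical, for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    treatment = rng.normal(0.3, 1.0, size=40)   # hypothetical treatment group
    control = rng.normal(0.0, 1.0, size=40)     # hypothetical control group

    diff = treatment.mean() - control.mean()
    se = np.sqrt(treatment.var(ddof=1) / len(treatment)
                 + control.var(ddof=1) / len(control))
    dof = len(treatment) + len(control) - 2
    t_crit = stats.t.ppf(0.975, dof)            # two-sided 95% interval

    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    print(f"Cohen's d = {diff / pooled_sd:.2f}")
    print(f"95% CI for the mean difference: "
          f"[{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")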

Bibliographic references:

  • Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS ONE, 9(9), e105825. doi:10.1371/journal.pone.0105825
  • Blanco, F., Perales, J. C., & Vadillo, M. A. (2017). Can psychology rescue itself? Incentive, bias and replicability. Anuari de Psicologia de la Societat Valenciana de Psicologia, 18(2), 231-252. http://roderic.uv.es/handle/10550/21652 doi:10.7203/anuari.psicologia.18.2.231
  • Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS ONE, 5(4), e10271. doi:10.1371/journal.pone.0010271