Since the early 2010s, the scientific community has drawn attention to a crisis of replicability in science, especially in psychology and medicine: the results of many studies cannot be replicated, or no attempt is ever made to replicate them.

Problems in confirming hypotheses, however, are only part of the picture; the replication crisis is broader in nature. It also encompasses the outright falsification of results, particularly in social psychology, as well as other significant methodological factors.

The crisis of replicability in science

One of the foundations of the scientific method is the replication of results. Although many people tend to treat the conclusions of a single study as credible and definitive, a hypothesis only becomes truly solid when it has been confirmed by several sound studies from different research teams.

In the same vein, negative results, that is, refutations of hypotheses, are as important as confirmations. However, the proportion of published studies that refute their hypotheses appears to have fallen across science in general; as a result, publications that corroborate experimental hypotheses clearly predominate.

Much of what has been written about the replication crisis highlights its magnitude in psychology. It is worth stressing, however, that the crisis affects science as a whole and is also particularly intense in medicine, owing to a series of interrelated factors.

The main causes of this phenomenon

A meta-analysis by Daniele Fanelli (2009) concluded that the fabrication and falsification of data are more common in medical and pharmaceutical research than in other fields. The author suggests that this may be due to the strength of the financial incentives attached to publication in these fields, or to a greater awareness of misconduct within them.

Several factors, however, contribute to the replicability crisis beyond the outright falsification of data. One of the most significant is publication selectivity: in general, positive and striking results are more likely to appear in journals and to bring researchers recognition and funding.

This is why the “file drawer effect” is so common: studies that do not support the expected hypotheses are shelved, while those that do are selected by their authors and published more often. Moreover, when positive studies are never replicated, the risk that their hypotheses will be refuted is reduced.

Other common practices serving the same goal include measuring a large number of variables and then focusing only on those that correlate, adjusting sample sizes on the fly (for example, adding participants until the results turn out positive), and running multiple statistical analyses but reporting only those that support the hypotheses, as the sketch below illustrates.
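
As a minimal illustration of the first of these practices, the following Python sketch simulates studies in which two groups do not differ on any of the measured variables; the number of variables, group size and threshold are illustrative assumptions, not figures from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 2_000   # simulated "studies"
n_variables = 20        # outcome variables measured in each study
n_per_group = 30        # participants per group
alpha = 0.05            # conventional significance threshold

studies_with_a_hit = 0
for _ in range(n_experiments):
    # Two groups with NO real difference on any variable (pure noise).
    group_a = rng.normal(size=(n_per_group, n_variables))
    group_b = rng.normal(size=(n_per_group, n_variables))
    # One t-test per variable; keep only the smallest p-value,
    # mimicking "report the analysis that worked".
    p_values = stats.ttest_ind(group_a, group_b, axis=0).pvalue
    if p_values.min() < alpha:
        studies_with_a_hit += 1

print(f"Per-test false positive rate: {alpha:.0%}")
print(f"Null studies reporting at least one 'significant' effect: "
      f"{studies_with_a_hit / n_experiments:.0%}")
# With 20 independent tests, roughly 1 - 0.95**20 ≈ 64% of purely null
# studies can still report a "positive" finding.
```

Even though every single test keeps its nominal 5% error rate, selectively reporting the variable that “worked” turns most null studies into apparent positives.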

Why is it so bad in psychology?

The replication crisis in psychology is generally considered to date back to the early 2010s. During this period numerous cases of fraud involving prominent authors came to light; the social psychologist Diederik Stapel, for example, was found to have falsified the results of several publications.

An analysis by Makel, Plucker and Hegarty (2012) found that only about 1% of the psychology studies published since the beginning of the 20th century are replications of previous work. This is a strikingly low figure, and it implies that many conclusions drawn from isolated studies cannot be taken as definitive.

The success rate of independent replications is also relatively low, at around 65%, whereas more than 90% of replications carried out by the original research team support the hypotheses. Studies with negative results are likewise especially uncommon in psychology, and the same can be said of psychiatry.

Solutions to the research crisis

The crisis of replicability in psychology, and in science generally, not only compromises the results of a large number of studies but can also legitimise hypotheses that have never been confirmed with the necessary rigour. The spread of incorrect hypotheses, in turn, can distort the development of entire fields.

At present, many economic interests (and others related to prestige) favour the continuation of the replication crisis. As long as the criteria governing which studies are published and which results are amplified in the mass media remain driven by profit and visibility, the situation is unlikely to change.

Most of the proposals for addressing the crisis centre on methodological rigour at every stage of research, as well as on broader participation by the scientific community; the aim is to strengthen the peer-review process and to encourage replication efforts.

Concluding

It must be borne in mind, on the one hand, that psychology works with a large number of variables and, on the other, that it is difficult to recreate conditions whose starting point matches that of another study. This makes it very easy for factors the research does not account for to “contaminate” the results.

In addition, the limitations of the statistical criteria used to decide whether an effect is real or merely a statistical artefact sometimes produce false positives: the mere fact that a p-value falls below the significance threshold does not necessarily mean that it reflects a real psychological phenomenon, as the short calculation below suggests.
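
A rough back-of-the-envelope sketch, under assumed (not article-sourced) values for how many tested hypotheses are actually true and how well-powered studies are, shows why a significant p-value alone is weak evidence:

```python
# Illustrative assumptions only: prior, power and alpha are not figures
# taken from the article.
prior_true = 0.10   # assumed share of tested hypotheses that are actually true
power = 0.80        # assumed probability of detecting a true effect
alpha = 0.05        # conventional significance threshold

true_positives = prior_true * power          # 0.08 of all tests
false_positives = (1 - prior_true) * alpha   # 0.045 of all tests

share_false = false_positives / (true_positives + false_positives)
print(f"Significant results that are false positives: {share_false:.0%}")
# ≈ 36%: under these assumptions, roughly a third of "significant"
# findings would be spurious despite passing the p < .05 threshold.
```

The exact proportion depends entirely on the assumed prior and power, but the arithmetic makes the general point: significance testing alone cannot certify that a psychological effect is real.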

Bibliographic references:

  • Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5).
  • Makel, M. C., Plucker, J. A. & Hegarty, B. (2012). Replications in psychology research: how often do they really occur? Perspectives on Psychological Science, 7(6): 537-542.
  • Nosek, B. A., Spies, J. R. & Motyl, M. (2012). Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6): 615-631.