In recent years, concern has grown about the lack of reproducibility in scientific research. Studies have shown that a substantial share of published findings cannot be replicated, with repeat analyses or experiments often yielding different results. To understand why researchers arrive at different conclusions, a study recruited hundreds of ecologists and evolutionary biologists to independently analyze the same datasets. Their answers varied widely.
The study has been accepted by BMC Biology as a Stage 1 Registered Report and is currently available as a preprint ahead of Stage 2 peer review.
The problem of reproducibility is common across scientific fields. Contributing factors include an over-reliance on simplistic measures of statistical significance, journals' preference for publishing exciting findings, and questionable research practices that prioritize excitement over transparency, all of which allow false results to enter the literature.
Efforts to improve reproducibility, such as “open science” initiatives, have been slow to spread across different scientific fields.
While interest in reproducibility has been growing among ecologists, there has been little research evaluating replicability in ecology. One challenge is distinguishing genuine environmental differences between studies from the influence of researchers' analytical choices.
To assess the replicability of ecological research, the study's organizers focused on what happens after data collection: they asked teams of ecologists and evolutionary biologists to answer two research questions, each based on a dataset provided to them. The responses varied substantially for both questions.
For the question about the influence of grass cover on Eucalyptus seedling recruitment, there were 63 responses: some described a negative effect, some no effect, some a positive effect, and some a mixed effect.
For the question about the effect of sibling competition on blue tit growth, there were 74 responses. Most described a negative effect, but only 37 teams considered that negative effect conclusive; others described no effect or a mixed effect.
Interpretations of these results varied among the researchers involved in the study. Some emphasized the wide range of estimated effects and the importance of accounting for seemingly small differences in analysis workflows; others highlighted the diversity of outcomes and the need to better understand the underlying biology.
The study suggests three courses of action to address the issue of reproducibility. First, researchers, publishers, funders, and the wider scientific community should avoid treating published research as absolute truth and should consider each article in context. Second, more analyses should be conducted per article, and all of them should be reported, to give a fuller picture of the results. Third, research publications should describe how the results depend on data-analysis decisions, improving transparency and helping readers interpret the findings.
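As a purely illustrative sketch of that third recommendation, the snippet below fits one dataset under several defensible model specifications and reports the full range of effect estimates; the simulated data, variable names, and specifications are hypothetical and are not taken from the study.

```python
# Hypothetical illustration of reporting how an effect estimate depends on
# analysis choices; the data and model specifications are invented, not
# those used by the teams in the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated stand-in data: seedling counts, grass cover (%), and rainfall.
n = 200
grass_cover = rng.uniform(0, 100, n)
rainfall = rng.normal(50, 10, n)
seedlings = rng.poisson(np.exp(1.0 - 0.01 * grass_cover + 0.02 * (rainfall - 50)))
df = pd.DataFrame({"seedlings": seedlings,
                   "grass_cover": grass_cover,
                   "rainfall": rainfall})

# A few defensible specifications an analyst might reasonably choose.
specs = [
    "seedlings ~ grass_cover",
    "seedlings ~ grass_cover + rainfall",
    "np.log1p(seedlings) ~ grass_cover",
    "np.log1p(seedlings) ~ grass_cover + rainfall",
]

# Fit each specification and collect the grass-cover coefficient so the
# write-up can report the whole range rather than a single estimate.
rows = []
for formula in specs:
    fit = smf.ols(formula, data=df).fit()
    rows.append({"specification": formula,
                 "grass_cover_coef": fit.params["grass_cover"],
                 "p_value": fit.pvalues["grass_cover"]})

print(pd.DataFrame(rows).to_string(index=False))
```

In a real write-up, a table or plot of estimates like this would accompany the headline result, so readers can see how sensitive the conclusion is to those analytical choices.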