When scientists make important discoveries, both big and small, they typically publish their findings in scientific journals for others to read. This sharing of knowledge helps to advance science: it can, in turn, lead to more important discoveries.
But published research papers can be retracted if there is an issue with their accuracy or integrity. And in recent years, the number of retractions has been rising sharply. For example, more than 10,000 papers were retracted globally in 2023. This marked a new record.
The huge number of retractions indicates a lot of government research funding is being wasted. More importantly, the publication of so much flawed research also misleads other researchers and undermines scientific integrity.
Fuelling this troubling trend is a mentality known in academia as “publish or perish”, which has existed for decades. The publication of research papers drives university rankings and career progression, yet the relentless pressure to publish has contributed to an increase in fraudulent data. Unless this changes, the entire research landscape may shift toward a less rigorous standard, hindering vital progress in fields such as medicine, technology and climate science.
A ‘publish or perish’ environment
Universities and research institutes commonly use the rate of publications as a key indicator of research productivity and reputation.
The Times Higher Education Index, which ranks these institutions, assigns significant weight to research, and publications are fundamental to this score.
Additionally, publications are closely tied to individual career advancement. They influence decisions on tenure, promotions and securing funding.
These factors create a “publish or perish” environment, a phrase coined in 1942 by sociologist Logan Wilson.
A growing trend
Recent evidence indicates the constant pressure to generate data and publish papers may be affecting the quality of research and fuelling retractions of research papers.
Retraction Watch is one of the largest databases monitoring scientific retractions. Its data reveal a growing trend in the number of publications being retracted.
In the past decade, there have been more than 39,000 retractions, with the annual number growing by around 23% per year.
Nearly half the retractions were due to issues related to the authenticity of the data. For example, it was found in August that Richard Eckert, a senior biochemist at the University of Maryland, Baltimore, faked data in 13 published papers. Four of these papers have been corrected, one has been retracted and the remainder are still awaiting action.
Plagiarism was the second most common reason research papers were retracted, accounting for 16% of retractions.
Fake peer review was another reason why research papers were retracted.
Typically, when a publication is submitted to a journal, it undergoes peer review by experts in the same field. These experts provide feedback to improve the quality of the work.
However, the use of fake peer reviewers has increased tenfold over the past decade. There has also been an eightfold rise in publications linked to so-called “paper mills”, which are businesses that provide fake papers for a fee.
In 2022, an estimated 2% of all publications were from paper mills.
Genuine mistakes in the scientific process accounted for only around 6% of all retractions in the last decade.
More pressure, more mistakes
One reason for the surge in retractions over the last decade may be that we are simply getting better at detecting suspicious data.
Digital publishing has made it easier to detect potential fabrication and to act against these dubious practices. No doubt, the current number of retractions underestimates a much larger pool of problematic papers.
But the intensification of the “publish or perish” culture within universities is also likely to blame.
Nearly all academic staff are required to meet specific publication quotas for performance evaluations, while institutions themselves use publication output to boost their rankings. High publication counts and citations enhance a university’s position in global rankings, attracting more students and generating income from teaching.
The prevailing reward system in academia often prioritises publication quantity over quality. When promotions, funding, and recognition are tied to the number of papers published, scientists may feel pressured to cut corners, rush experiments, or even fabricate data to meet these metrics.
Changing the model
Initiatives such as the San Francisco Declaration on Research Assessment (DORA) are pushing for change. This initiative advocates for evaluating research based on its quality and societal impact rather than journal-based metrics such as impact factors or citation counts.
A shift in journal policies to prioritise the sharing of all experimental data would enhance scientific integrity. It would ensure researchers could replicate experiments to verify others’ results.
Also, universities, research institutions and funding agencies need to improve their due diligence and hold those responsible for misconduct accountable.
Including a simple question such as, “Have you ever had or been involved in a retracted paper?” on grant applications or academic promotions would improve the integrity of research by deterring unethical behaviour. Dishonest answers could be easily detected, thanks to the availability of online tools and databases such as Retraction Watch.
Over the past 20 years, scientific research has significantly improved our quality of life. Career scientists must shoulder the responsibility of ensuring researchers uphold the values of truth and integrity that are fundamental to our profession. Protecting the integrity of our work is central to our mission, and we must remain vigilant in safeguarding these principles.