
Early COVID-19 research is riddled with poor methods and low-quality results: a problem for science the pandemic worsened but didn't create

Early in the COVID-19 pandemic, researchers flooded journals with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.

Author


  • Dennis M. Gorman

    Professor of Epidemiology and Biostatistics, Texas A&M University

However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used poor-quality methods. Reviews of the literature published in medical journals have also shown that much early COVID-19 research used poor research methods.

Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar drew on data from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited widely.

A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not an indicator of research quality, since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.

Methodology matters

I am an epidemiologist with a long-standing interest in research quality and integrity. This interest stems from a belief that science has helped solve important social and public health problems. Unlike the anti-science movement that spreads misinformation about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.

The quality and integrity of research depends to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to provide valid and useful information.

For example, researchers have long recognized that for studies evaluating the effectiveness of an intervention, a control group is needed to know whether any observed effects can be attributed to the intervention.

Systematic reviews pulling together data from existing studies should describe how the researchers identified which studies to include, assessed their quality, extracted the data and preregistered their protocols. These features are necessary to ensure the review will cover all the available evidence and tell a reader which of it is worth attending to and which is not.

Certain types of studies, such as one-time surveys of convenience samples that aren't representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable influences another.

All of these methodological features are described in published standards that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting it all at the same time.

Comparisons of COVID-19 and non-COVID-19 papers published in the same journals found that COVID-19 papers tended to have lower-quality methods and were less likely to adhere to reporting standards than non-COVID-19 papers. COVID-19 papers rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant there were no safeguards against running analysis after analysis to find "statistically significant" results that could be selectively reported.

Such methodological problems were likely overlooked in the expedited peer-review process for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 papers on COVID-19 to be far shorter than that of 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had an extremely rapid peer-review process.

Publish-or-perish culture

These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.

Journals tend to favor novel, positive findings: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.

Academics have worked in a publish-or-perish culture for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The pandemic afforded researchers an opportunity to increase their publication counts and boost citation metrics as journals sought and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID papers.

Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some online journals now publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.

Healthy criticism

Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an open society capable of solving complex problems such as a worldwide pandemic.

Publishing a large volume of misinformation disguised as science during a pandemic at best sows confusion. At worst, this can lead to bad public health practice and policy.

Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves critical evaluation of a study's design, statistical methods, reproducibility and transparency, not the number of times it is cited or tweeted about.

Science depends on a careful, rigorous approach to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.

Public health heavily draws upon disciplines that are in the midst of a replication crisis, such as psychology, biomedical science and biology. It is similar to these disciplines in its incentive structure, study designs and analytic methods, and in its inattention to transparent methods and replication. Much public health research on COVID-19 shows that it suffers from similarly poor-quality methods.

Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.

The Conversation

Dennis M. Gorman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation.