COVID-19: AI tool speeds up scientific review process

13 May, 2020
A machine learning tool that can assess the credibility of research could shorten the review period for scientific studies and, potentially, help identify the most promising research on COVID-19.

Assessing the merit of scientific papers can be a challenging task, even for experts. The process of peer review can be lengthy and is frequently subjective.

The existence of published studies that other researchers have been unable to replicate has also raised concerns about the review process.

One survey discovered that more than 70% of researchers have failed to reproduce another scientist’s experiments, with more than half failing to reproduce their own research findings. Some have even described this problem as a crisis.

With no reliable way to determine which papers are reproducible and which are not, many irreproducible studies continue to circulate through the scientific literature.

To help scientists determine which research is the most promising, a team from the Kellogg School of Management at Northwestern University in Evanston, IL, is rolling out a machine learning tool that takes subjective judgment out of the process and dramatically shortens the review period.

Details of the model feature in PNAS.

The reproducibility test
Explaining the limits of peer review, Prof. Brian Uzzi, who led this study, says: “The standard process is too expensive, both financially and in terms of opportunity costs. First, it takes too long to get to the second phase of testing, and second, when experts are spending their time reviewing other people’s work, it means they are not in the lab conducting their own research.”

Uzzi and his team have developed a form of artificial intelligence (AI) to help the scientific community make quicker decisions about which studies are likely to yield benefits.

One of the main tests of the quality of a study is its reproducibility: whether other scientists can replicate the findings it reports when they perform the same experiments. The algorithm that Uzzi and his team produced predicts this factor.

The model, which combines human input with machine intelligence, makes this prediction by analyzing the words that scientific papers use and recognizing patterns that indicate whether the findings have value.

“There is a lot of valuable information in how study authors explain their results,” explains Uzzi. “The words they use reveal their own confidence in their findings, but it is hard for the average human to detect that.”

The model can detect word choice patterns that may be invisible to a human reviewer, who might instead focus on the strength of the statistics in a paper, the developers say. There is also a risk that reviewers could be biased toward the topic or the journal that published the paper, or that persuasive words such as “remarkable” might influence them.
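
The paper does not publish its code, but the general idea of scoring word-usage patterns can be illustrated with a minimal text-classification sketch. Everything below is a hypothetical stand-in, not the authors’ actual model: a TF-IDF bag-of-words representation plus a logistic regression classifier, trained on placeholder texts and labels.

```python
# Minimal illustrative sketch, NOT the published model: learn which
# word-usage patterns distinguish papers whose findings replicated
# from those that did not. All texts and labels are hypothetical
# placeholders; a real corpus of full paper texts would be needed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

paper_texts = [
    "we consistently observe the effect across all replications",
    "the effect is remarkable and unprecedented in this sample",
    "results hold under every robustness check we performed",
    "a striking, surprising pattern emerged in one condition",
    "estimates are stable across labs, datasets, and methods",
    "this astonishing finding overturns decades of consensus",
]
replicated = [1, 0, 1, 0, 1, 0]  # 1 = findings replicated, 0 = did not

# TF-IDF turns each paper into word-frequency features; logistic
# regression then weights each word by how strongly it signals
# replicability in the training data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(paper_texts, replicated)

# Score a new, unseen paper in seconds: the estimated probability
# that its findings would replicate.
new_paper = "the remarkable effect appeared in a single experiment"
print(model.predict_proba([new_paper])[0][1])
```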

Minutes rather than months
The researchers first trained the model using a set of studies known to be reproducible and a set known not to be. They then tested the model on a group of studies that it had never seen before.
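
A hedged sketch of that evaluation protocol, reusing the hypothetical `paper_texts`, `replicated`, and `model` from the snippet above: hold out papers the model never saw during training and measure accuracy only on those. The split ratio and random seed are arbitrary choices.

```python
# Sketch of the train/hold-out protocol described above, continuing
# the hypothetical example from the previous snippet.
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

train_texts, test_texts, train_labels, test_labels = train_test_split(
    paper_texts, replicated, test_size=0.33, random_state=0
)

model.fit(train_texts, train_labels)     # train only on the labeled sets
predictions = model.predict(test_texts)  # score papers never seen before
print("held-out accuracy:", accuracy_score(test_labels, predictions))
```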

They compared the output with that of the Defense Advanced Research Projects Agency’s Systematizing Confidence in Open Research and Evidence (DARPA SCORE) program, which relies on subject experts to review and rate scientific studies. However, on average, that process takes the best part of a year to complete.

When the team used the model alone, its accuracy was similar to that of DARPA SCORE, but it was much quicker, taking minutes instead of months.

In combination with DARPA SCORE, it predicted which findings would be replicable with even greater accuracy than either method alone. It is likely that scientists will use it this way in practice, to complement human assessments.
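
The article does not specify how the two signals are combined; a simple weighted average is one plausible reading. The sketch below, with an assumed 0-to-1 rescaling of expert ratings and an equal weighting, is an illustration rather than the published method.

```python
# Hypothetical blend of the machine score with an expert rating, as
# one way to "complement human assessments". The rating scale, the
# rescaling, and the 50/50 weight are assumptions for illustration.
def combined_score(model_prob: float, expert_rating: float,
                   expert_max: float = 5.0, weight: float = 0.5) -> float:
    """Blend a model probability (0-1) with an expert rating (0-expert_max)."""
    expert_prob = expert_rating / expert_max  # rescale rating onto 0-1
    return weight * model_prob + (1.0 - weight) * expert_prob

# Example: a model probability of 0.82 and an expert rating of 3.5/5
# blend to roughly 0.76.
print(combined_score(model_prob=0.82, expert_rating=3.5))
```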

“This tool will help us conduct the business of science with greater accuracy and efficiency,” Uzzi says. “Now, more than ever, it’s essential for the research community to run lean, focusing only on those studies which hold real promise.”

Application to a pandemic
The team says that the rollout of the model could be immediate, allowing it to analyze the raft of COVID-19-related research that is currently emerging.

“In the midst of a public health crisis, it is important that we focus our efforts on the most promising research,” says Prof. Uzzi. “This is important not only to save lives but also to quickly tamp down the misinformation that results from poorly conducted research.”

Research is occurring at an unprecedented rate, and policymakers around the world are planning to accelerate clinical trials to find a treatment or vaccine for the disease. The Northwestern researchers say that their tool may help policymakers prioritize the most promising studies when allocating resources.

“This tool is especially useful in this crisis situation, where we can’t act fast enough. It can give us a precise estimate of what’s likely to work and not work very quickly. We’re behind the ball, and this might help us catch up,” concludes Uzzi.
Source: www.medicalnewstoday.com