Can you spot a fake? New tool aims to help journals identify fake reviews

Fake peer reviews are a problem in academic publishing. A big problem. Many publishers are taking proactive steps to limit the effects, but massive purges of papers tainted by problematic reviews continue to occur; to date, more than 500 papers have been retracted for this reason. In an effort to help, Clarivate Analytics is unveiling a new tool as part of the December 2017 release of ScholarOne Manuscripts, its peer review and submission software. We spoke to Chris Heid, Head of Product for ScholarOne, about the new pilot program to detect unusual submission and peer review activity that may warrant further investigation by the journal.

Retraction Watch: Fake peer reviews are a major problem in publishing, but many publishers are hyper-aware of it and even making changes to their processes, such as not allowing authors to recommend reviewers. Why do you think the industry needs a tool to help detect fake reviews?

Chris Heid: Although the evidence is clear that allowing authors to suggest reviewers increases the chances of peer review fraud, there are still significant numbers of journals that use this as one of many methods to find qualified reviewers. We estimate that about half of the journals using ScholarOne Manuscripts continue to allow authors to add recommended reviewers during submission despite the risk.

The reason that journals don’t completely lock down these suggestions from authors, or limit profiles to verified institutional addresses, is that journals continue to struggle to find peer reviewers. According to our analysis of five years of peer review trends on ScholarOne journals, the average number of invitations sent to reviewers for research articles has almost doubled in the last five years.

Instead of trying to eliminate all risk and make the peer review process even slower, journal publishers take a calculated risk and rely on human intervention to mitigate it. That intervention adds time to the overall process and requires publishers to staff extra background checking, so peer review is slower and costs more for every article.

This tool’s goal is to improve a journal’s reputation by simplifying the management of a process that relies on hundreds or even thousands of individual stakeholders. Even though the vast majority of peer reviews are legitimate, the reputational risks are very real for publishers. Why continue to work based solely on trust and human effort when technology can automate this for us?

Clarivate Analytics is leading the charge on multiple fronts to provide the tools and information needed to combat fraud and improve the peer review process from end to end.

For example, by the end of the year, journals can use Publons Reviewer Locator/Connect (final name undecided) — the most comprehensive and precise reviewer search tool — to help identify the right reviewers, assess their competency, history and availability, contact them and invite them to review.

Recognition through Publons helps motivate reviewers to do a thoughtful and efficient job. The fraud prevention tool follows the submission of the review report to flag potential fraud.

RW: Can you say briefly how the tool works? What it looks for, etc? Anyone can spot a reviewer that’s not using an institutional email address, so what other qualities help signify a review is fake?

CH: A non-institutional email address, or the absence of a Publons reviewer profile with a verified review history, is not by itself a foolproof indicator of peer review fraud. The fraud prevention tool evaluates more than 30 factors based on web traffic, profile information, submission statistics and other server data, compiled by our proprietary algorithm, to find fake profiles, impersonators and other unusual activity. This happens multiple times throughout the submission and review process.

By themselves, these factors may not trigger an alert, but in combination with other actions they can raise the risk level of a submission. From there, it is up to the journal editor and/or publisher to determine the next steps. In the long run, this tool will help reduce the number of retractions by highlighting issues during the submission process rather than after publication.
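The combining-signals idea described above can be sketched as a simple weighted scoring rule: no single factor raises an alert, but several together push a submission over a threshold. The actual ScholarOne algorithm is proprietary, so the factor names, weights, and threshold below are invented purely for illustration.

```python
# Hypothetical sketch of a multi-factor risk scorer. The real tool's
# 30+ factors, weights, and threshold are proprietary; these are made up.

FACTOR_WEIGHTS = {
    "non_institutional_email": 1,
    "no_verified_review_history": 1,
    "review_submitted_within_hours": 2,
    "reviewer_ip_matches_author": 3,
    "profile_created_after_submission": 2,
}

ALERT_THRESHOLD = 4  # chosen so that no single factor triggers an alert


def risk_score(observed_factors):
    """Sum the weights of the factors observed for a submission."""
    return sum(FACTOR_WEIGHTS.get(f, 0) for f in observed_factors)


def evaluate(observed_factors):
    """Return the score and an alert flag for the editor to act on."""
    score = risk_score(observed_factors)
    return {"score": score, "alert": score >= ALERT_THRESHOLD}


# One weak signal on its own does not raise an alert...
print(evaluate(["non_institutional_email"]))
# ...but several signals in combination do.
print(evaluate(["non_institutional_email",
                "reviewer_ip_matches_author",
                "review_submitted_within_hours"]))
```

As in the interview, the sketch only flags the submission; the decision about next steps stays with the journal editor or publisher.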

RW: How can journals and publishers get access to the tool? Will there be a fee?

CH: Because the integrity of published research is at risk from peer review fraud, Clarivate is offering this as a core, free feature in the next ScholarOne release (December 2017). Journals may request that the tool be activated in the interface at any time. The tool’s report access levels can also be configured by role for each individual journal.

RW: Have you tested the tool’s effectiveness? Do you have any data on its rate of success, as well as false negatives or positives?

CH: The tool relies on alerts based on the right combination of factors and leaves the decision to the journal editor or publisher. This is similar to alerts a bank may issue about potential fraud. For example, if you receive an alert about unusual charges on your account, it could be legitimate if you’re on vacation or it could indicate actual credit card theft.

Clarivate has actively worked on this capability for the past year, continually balancing and refining the approach with feedback from publishers who manage this risk every day. Refinements based on that feedback included adjustments to the tool’s sensitivity and user interface.

Early testers reported that a number of alerts led to direct action, including the rejection of a paper that had been accepted but not yet published, and a re-review of another paper by an editor and an external reviewer. Once the feature is live in December, we expect additional refinement through feedback tools.
