Long Live Peer Review – Expand and Differentiate

Photo: Elisa CB via Unsplash

In today’s blog in PRIO’s series marking this year’s Peer Review Week, Pavel K. Baev reflects on his own experiences reviewing and being reviewed and the challenges posed by unclear expectations on reviewers. He suggests that a partial solution may lie in a clearer delineation between different types of review.

What can be done to restore confidence in the "peer-reviewed" label, which on occasion graces publications that clearly fall short of the standards of academic rigor and integrity?

Frustration with the current confusion in peer review processes is a sadly familiar feeling for me. I usually perform 12-15 reviews a year and find myself on the receiving end of 4-5 review processes. While I take this commitment seriously (and so decline about as many requests for review as I agree to), at least half of the evaluations of my work are superficial (often accepting with no changes required), unhelpful (focusing on what the analysis does not cover rather than on what it does), or sharply opinionated.

The demand for publishing in open access journals embodied in the intransigent Plan S has definitely deepened this confusion, as many new journals (not to mention the proliferating predatory publications) have rushed to meet this demand, often with very relaxed approaches to the long-established norms of academic evaluation. The problem is certainly wider than the journal market alone, and extends to book proposals, particularly edited volumes, and to electronic publishing, particularly institutional platforms and blogs. In an environment of information overload and proliferating fake news, the need for reliable peer review is greater than ever, and the task of overcoming the confusion about the real value of the "peer-reviewed" stamp looms large for international academia. We cannot count on the noble tradition to uphold itself.

At the top of the pyramid, journals that put maximum value on their academic reputation could declare a policy of, for instance, providing three double-blind reviews by scientists with full professor competence. A tall order, for sure, but turning editorial boards into evaluation mechanisms (rather than just lists of big names, as is often the case) with clearly defined procedures and responsibilities could help. At the bottom of the pyramid, think-tanks may find it useful to establish that their web publications are reviewed anonymously by staff members and affiliates (the "Order from Chaos" blog produced by the Brookings Institution is the exemplar for me). A clear explanation of the rules and categories of peer review would then be expected from every journal – and used for making judgements on the cogency of articles and posts.

This suggestion cannot address the issue of poor-quality peer reviews, but then, as I have mentioned, it is advanced only as part of the solution. Another part could be rewarding the timely delivery of solid evaluations, but paying money doesn't feel right and goes against tradition. An Amazon gift card for books, with a symbolic value of US$25-50, can be a nice gesture of appreciation – and a sufficient stimulus, at least in my book.

Coming back to the issue of differentiation, I would argue that while the Norwegian practice of dividing academic journals into Level 1 and Level 2 groups is inevitably controversial (suffice it to mention the absence of International Affairs from the latter group), it has proven useful, so the rigor of peer review could also be graded – and in more than two categories. Such a measure could help reduce the confusion about the true value of the peer review process and give readers a better indication of the reliability of findings in a publication that has gone through peer review.


Pavel K. Baev is a Research Professor at PRIO. His research focuses on Russia’s foreign and security policy. 
