Peer review is an important component of the process of research. It is central to the assessment of both grant applications and manuscripts intended for publication. It is not a particularly popular process — we have all had the experience of receiving negative reviews, written apparently by an illiterate with no understanding of science, of our skillfully written and insightful grant application or manuscript. Furthermore, the results of some studies testing the effectiveness of peer review are not reassuring. The reason peer review continues to be used in its current form is that there is, at the moment, no clear consensus on an alternative. This, and the occasional misconception about how peer review works in practice, helps to tarnish its reputation. The purpose of this editorial is to help demystify peer review of manuscripts by reviewing some of the relevant research studies and describing how the process is implemented at the Journal of Psychiatry & Neuroscience (JPN).
As pointed out by Jefferson et al,1 editorial peer review is an ill-defined term, and its aims are not always clearly stated. Sometimes it can refer to in-house review, but more commonly, as in this editorial, it refers to opinions gathered from external experts. Jefferson et al1 propose that the aims of peer review “may be categorized as (1) selecting submissions for publication (by a particular journal) and rejecting those with irrelevant, trivial, weak, misleading or potentially harmful content, and (2) improving the transparency, accuracy and utility of the selected submissions.” One of the problems in assessing the various studies of peer review is the variety of study questions and end points. Nonetheless, a Cochrane review of editorial peer review concluded that “At present there is little empirical evidence to support the use of editorial peer review as a mechanism to ensure quality of biomedical research, despite its widespread use and costs.”2 This, of course, does not mean that editorial peer review is ineffective, only that it has not been shown to be effective. Even with a consensus on detailed objectives for peer review, many methodological problems would have to be solved to assess it adequately, and the research would necessarily involve a large number of authors and editors.1 Convincing evidence supporting the effectiveness of peer review is unlikely to appear in the near future. Meanwhile, a number of studies, although often small and not necessarily generalizable, provide guidance on some aspects of the process. Some of these studies are mentioned below.
When a manuscript arrives at the JPN office, it is assigned to 1 of the 2 editors-in-chief. The assignment takes into account the various areas of expertise of the 2 editors, while evening out the workload. A request to have a manuscript dealt with by one or the other of the editors will be honoured if a legitimate reason is given; the feeling that one of the editors is the softer option is not legitimate (and is not true). The first task when a manuscript arrives on my desktop is to read it quickly to determine its suitability for JPN. The most common reason for rejecting a manuscript at this stage is that the topic is purely clinical. JPN aims to publish clinical work that has implications for brain mechanisms and basic science studies that are relevant to psychiatry. Sometimes a manuscript is so obviously flawed that it would be a waste of the reviewers’, editor’s and authors’ time to send it out for review. Reasons for rejection at this stage might include the lack of a control group (or an inappropriate one), a marked lack of power to achieve the objectives of the study, inappropriate methodology or a negative result that is of no particular interest.
The next issue is the choice of potential reviewers. The ideal reviewer is someone who has a broad knowledge of the topic, is well versed in all methodological details including statistics, and has the altruism to take the time to study the manuscript carefully and write a report that is helpful to the editor and authors. Despite the lack of credit given to those who review grant applications and manuscripts, such people do exist. Some reviewers do an excellent job, some do a capable job, some are careless and very few are appalling. Fortunately, JPN has not yet had any experience with dishonest reviewers (or rather, is not aware of any such experience), although some suspect that competitive pressures are increasing unethical behaviours by reviewers.3 JPN keeps a highly confidential database of active reviewers, along with information on how quickly they respond and the editors’ opinions of the quality of their reviews. There is also a list of those we know never review and a thankfully short list of those who agree to review but never get around to producing their review. We use our database and personal knowledge to identify possible reviewers, taking into account the need not to burden anyone with too many requests. If our resources are inadequate, a MEDLINE search often helps to identify active high-quality researchers in the area. Often researchers who are identified in this way will have a Web site that provides additional evidence of suitability.
Once suitable potential reviewers have been identified, the next question is the best way to contact them. A randomized trial has shown that asking potential reviewers if they are willing to review a manuscript before sending it to them results in a higher turn-down rate than just sending the manuscript, but reviewers who are asked first complete their reviews faster.4 JPN asks first. Sometimes those asked decline but suggest other potential reviewers. It is important to check out these names. Recently, a distinguished researcher suggested someone else, but a MEDLINE search turned up no publications associated with the name given. This was explained when a search of the distinguished researcher’s department Web site showed that the suggested reviewer was a first-year graduate student. MEDLINE and university Web sites are a great asset in identifying suitable reviewers and have helped broaden the scope of the JPN reviewer database. In the early years of the journal, most of our reviewers were Canadian. Now, active reviewers are split almost evenly between Canada, the United States and the rest of the world.
It is easy to identify suitable and willing reviewers for some articles; for others, it can take 8 or more requests for a single article. To speed the process, emails are sent out to a number of people at the same time, and if all respond positively, so much the better. Some reviewers are slow in producing their reviews. A comparison of the effectiveness of various methods for prodding tardy reviewers (i.e., phone, fax and email) showed no differences in response rates,5 so JPN uses email when possible because of its convenience and low cost.
There is continuing discussion on the desirability of openness in peer review. Some journals remove the title page with the names of the authors before sending articles out for review. JPN does not. As discussed by Godlee,6 attempts to conceal authors’ identity from reviewers are unlikely to succeed. In 4 trials on the effect of blinding, reviewers who were not told the identity of authors were able to guess correctly in 23%–42% of cases. High-profile authors are almost certainly identified more easily, introducing a bias when attempts at blinding are used. Godlee6 has argued that the names of both authors and reviewers should be known to each other. The arguments in favour of open review (i.e., revealing the names of the reviewers) are (1) its ethical superiority, (2) lack of important adverse effects, (3) feasibility in practice, and (4) potential to balance greater accountability for reviewers with credit for the work they do. The second point is debatable. Studies of open review suggest that it increases the number of reviewers who decline to review, the likelihood that reviewers will recommend acceptance and the time taken to produce a report.6 Like most journals, JPN does not reveal the names of reviewers, although some, including the BMJ and the BioMed Central journals, do. As yet, there is insufficient evidence to say that open review does not allow important flaws to pass unmentioned during the review of some manuscripts, given the possibility that reviewers may be reluctant to criticize when their identity is known. However, open review has its appeal, and JPN may change its policy if future research demonstrates its benefits.
In a study of the reliability of assessors’ rankings during peer review, kappa was 0.1, indicating poor inter-rater reliability.7 Reviewers agreed best on whether to reject a paper. This is certainly consistent with my own experience. In a recent study of peer review of grant applications, when the opinions of the external reviewers diverged, the final rating of the committee was usually closer to the lower score given by the external reviewers.8 This is also consistent with my experience with manuscripts. Often one reviewer will point out flaws that the others missed. A half page of mindless praise is not unknown and is as unhelpful as a listing of typographical errors and a recommendation to reject.
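For readers unfamiliar with the statistic, kappa measures agreement between two raters beyond what chance alone would produce (1 is perfect agreement, 0 is chance-level agreement). As an illustration only, using hypothetical reviewer recommendations rather than data from the cited study, Cohen’s kappa can be computed as follows:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(ratings_a)
    # Proportion of manuscripts on which the two reviewers agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: probability both raters pick the same category
    # if each chose independently at their own observed frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations from two reviewers on 10 manuscripts.
rev1 = ["accept", "revise", "reject", "revise", "accept",
        "reject", "revise", "accept", "revise", "reject"]
rev2 = ["revise", "revise", "reject", "accept", "revise",
        "reject", "accept", "revise", "revise", "reject"]
print(round(cohens_kappa(rev1, rev2), 2))  # agreement only modestly above chance
```

Here the two reviewers agree on half the manuscripts, but because chance alone would produce substantial agreement with only three categories, kappa is far lower than the raw agreement rate; a kappa of 0.1 means reviewers agree scarcely more often than chance.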
Making the final decision on the basis of the reviewers’ reports and the editor’s own reading of the article is one of the most interesting duties of an editor, because it is entirely subjective. The reviewers’ reports certainly aid but do not determine a decision; that is, favourable reviews do not guarantee acceptance of a paper, because other factors such as priority and importance also influence the final decision. On the other hand, we all know of papers that are citation superstars but were originally rejected by a journal because the manuscript was thought to be uninteresting. Some of the most difficult decisions are about manuscripts that are sound but seem dull or are potentially important but flawed in some way. Ultimately, the success of JPN will depend on the ability of the editors to make the right decision in these situations.
Over the past 50 years, and even over the past decade, the language of the scientific literature has become steadily more obscure.9 This problem is compounded by the relatively poor writing skills of some authors. A common dilemma for editors is how much effort to put into rescuing a potentially interesting manuscript that is buried under verbiage and jargon and infused with the passive voice. Personally, I am not willing to rewrite a paper for authors, but I am willing to put in time detailing the minor changes needed in a manuscript, particularly for authors whose native language is not English or who come from institutions without an established tradition of research. Clarity of language is secondary to scientific content, but it can certainly influence the fate of some manuscripts.
Few manuscripts are accepted without changing a word. Sometimes a request for revision will precede even conditional acceptance. If the changes requested are minor, the revised manuscript does not need to go back to the original reviewers. In a letter to the authors accompanying the reviewers’ reports, I try to indicate the main issues of concern. I also mention points that reviewers seldom mention. For example, I check that the article mentions how consent was obtained and what ethics committee approved the study for clinical research or what animal care committee approved the study for animal work. A recent study of high-profile journals found that 9% of papers reporting the results of clinical trials made no mention of informed consent or ethics committee approval.10 Apparently, ethics is still not an integral part of research for some researchers, reviewers and editors. Reviewers’ comments may be edited before being sent to authors, for some reviewers combine insightful comments with factual errors — and a few have curious perspectives about statistics. Knowing my own limitations when it comes to statistics, I consult a statistician if I have any doubts in that area. This is an area of particular concern given that serious statistical errors are common even in some high-profile journals.11
The responses of authors to requests for revisions are varied. The opening of the letter often includes thanks to the reviewers for their helpful comments, but the inclusion of this sentiment is not always accompanied by a constructive response to those comments. Some authors seem to do the minimum they think will satisfy the editor and reviewers, and some fail to respond to all the issues raised. These responses often lead to a second request for revisions. Surprisingly, reasoned rebuttals of the reviewers’ comments are not frequent. This might suggest that some authors feel that editors do not listen to reason. On the contrary, I want the papers in JPN to be as good as possible, and if the author makes a more cogent argument than the reviewer, I am happy to accept the views of the author. The Lancet has an official appeal mechanism for rejected papers. About 5% of rejected papers are the subject of an appeal, and just over 10% of these are finally accepted for publication.12 JPN does not have an official appeal process, but the editors are willing to listen to any argument authors put forward concerning their manuscripts and the reviewers’ comments on them.
Little research has been done on authors’ perceptions of peer review. However, a study of authors submitting papers to the Annals of Emergency Medicine found that author satisfaction with peer review was only modest.13 Authors of rejected manuscripts were least satisfied with the time to decision and with the letter explaining the editorial decision. In fact, author satisfaction was more related to acceptance of the manuscript than to the quality of the reviews.
Time to publication is a matter of great concern, and in my experience many authors focus more on the time it takes to review a manuscript than on the time it takes them to revise it. This is an illustration of the well established principle that a controllable stressor is less stressful than an uncontrollable one. The editors of JPN are well aware of the need for timely review and publication. JPN remains competitive in this area and is always striving to do better.
How well are the editors of JPN performing in the peer review process? That is for authors to decide. The overall quality of manuscripts published in JPN is improving, leading to a steadily increasing impact factor, and this suggests we are doing something right. A much more important question, however, is how the peer review system overall is serving the scientific community. Here, the answer is less encouraging — in fact, it is discouraging. All those who publish the results of their research should read a recent review by Altman.11 He summarizes some of the studies that have identified problems in published research and concludes that “research papers commonly contain methodological errors, report results selectively, and draw unjustified conclusions.” Of particular concern are the studies that identify examples of incorrect practices that persist despite published warnings. It seems that the combined knowledge base of reviewers and editors is not always adequate. What can be done about this distressing situation? Altman11 makes several suggestions, including the use of post-publication peer review and placing greater emphasis on methodological review. Post-publication peer review is implemented by many journals through the publication of letters. A few journals such as BMJ and CMAJ use their Web sites for rapid responses to articles. One strategy that is not successful in improving peer review is written feedback from editors to reviewers. In 2 randomized studies, this intervention produced no improvement in the quality of reviews; for poor-quality reviewers, there was a trend towards a negative effect.14 In some situations, guidelines are helpful. For example, a recent study found that journals that promote the Consolidated Standards of Reporting Trials (CONSORT), which were developed to improve the suboptimal reporting of randomized controlled trials, demonstrated superior reporting of trials.15 However, persistent inadequacies in reporting remained. 
Although standards such as CONSORT are well suited to randomized controlled trials, with their limited types of design, they are not appropriate for many areas of research.
Although more research on peer review is certainly desirable, improvements in the standards of the scientific literature will occur only when there is better and more consistent training in research methodology. Not all graduate and residency programs require trainees to take courses in research methodology, and I am not aware of any formal attempts to teach research trainees how to review papers and grant applications. Surely, there should be formal training in a procedure that is so fundamental to the process of research. As far as editors are concerned, improvements should occur through survival of the fittest: those who implement peer review well will be rewarded by the success of their journals.
Footnotes
Medical subject headings: editorial policies; peer review, research; periodicals; publishing; research; writing.
Competing interests: None declared.