Peer review was a popular topic in 2010. Not that it hadn’t been discussed in the media before, but the issue seems to have popped up more than ever over the past year. Here, I’ll use three examples among many1 from 2010, each of which has led to calls for strengthening, “tweaking”, or abolishing the editorial peer review system. The dominant discourses reveal a disconnect both in the level at which peer review is being analyzed and in the expectations placed on the process. Editorial peer review is not a “gold standard”, nor a way of producing scientific knowledge; it is difficult to categorically say whether it “works” or not. It is equally problematic to systematically dismiss editorial peer review as only a basic means of quality control that leaves all judgment to an ad hoc post-peer review process (though this approach is certainly effective under certain circumstances). In order to address concerns about peer review within a specific context, the process itself should be viewed as a set of practices, mainly used to demarcate boundaries (of science as a whole and of individual specialties) and to favour consensus building.
Arsenic, climate and clinical trials…
Following the backlash from the “hype” of NASA’s public relations efforts, there was a major debate over whether the article in question should have been published in the first place (or whether it was “worthy” of publication in Science). For others, the problem with this episode, like that of cold fusion 20 years ago, lay in a hasty “passage” to the public sphere. This implicitly means that institutionalized editorial peer review is the solution, not the problem. Other perspectives focused on the self-correcting nature of science, in this case mostly occurring as post-editorial peer review discussions. The blogosphere buzz around this article has indeed been something akin to an extremely “inclusive” form of expanded peer review and is certainly interesting in its own right, especially as one considers the strengths and weaknesses of the “blog” model of peer review. But this “backlash” effect could hardly be considered a model for ensuring scientific accuracy.
Peer review became an issue in climate change science on several levels, such as the free availability of data, the selection of articles for publication and the relatively small errors found in the Intergovernmental Panel on Climate Change (IPCC) reports. For the IPCC, the issue of editorial peer review was primarily boiled down to governance. Informed criticism (both external and internal) has maintained the robustness of the knowledge produced but has called for various changes in the process or suggested new models for review, despite the already “expanded” form currently employed. Interestingly, the IPCC has also been (mildly) chastised for not relying enough on peer-reviewed literature (as opposed to grey literature). Naturally, the “science-for-policy” and the public scrutiny characteristics of this field imply different expectations for review2, which are not applicable as a generalized normative view of the process.
A recent New Yorker article on the “decline effect” (a weakening over time of positive correlations being observed in psychology and clinical trials) points the finger at, among other things, peer review’s bias toward positive results. A similar article in The Atlantic from earlier last year, which bluntly points out that much of peer-reviewed science is wrong, focuses more on problems with the peer review process than on any unethical behaviour by authors (though this is also an issue). Other accusations of bias, though different in nature, have also recently been leveled against reviewers in stem cell research.
What does this all mean and where do we go from here?
What strikes me about these three stories (among many) is that (a) peer review itself was not the initial focus, and (b) none of these cases is centred around blatant fraud or errors making it through the peer-review system (whereas in previous years, this was the main concern). Most of all, underlying the debate are perceptions of peer review which examine it in terms of its impartiality, its “transparency” as a system, the norms and practices of groups of peer reviewers, and its ability to generate “sound” and “reproducible” knowledge. Needless to say, while these topics may be related, they cannot be treated en bloc. Similarly, terms such as “reproducibility”, frequently associated (directly or indirectly) with the peer review process, need to be situated within specific social/cognitive contexts or tied to precise mechanisms.
Scholars in the history, philosophy and especially the sociology of science have a major role to play in this issue. Editorial peer review as we know it is relatively recent and has developed somewhat haphazardly, not in the same way as grant peer review3. The evolution and institutionalization of editorial peer review over the past half century has much to do with the lack of clarity in its process and actors that we perceive today. Many relatively straightforward—but not simple—questions remain. For instance, how can peer review be understood in the context of “normal” or “revolutionary” science, or even “marginal” areas of science? What can be said about the variability of peer review practices? Different disciplines and specialties will have entirely different conceptions of what editorial peer review is (whereas grant peer review is more standardized). Scholars with expertise in how science “works” can, at the very least, help frame the debate.
There is currently a great deal of research that aims to reconcile different levels of analysis of peer review, though grant peer review is often its primary focus. But perhaps a discussion of what type of scientific knowledge peer review can or cannot generate should not be the main avenue. Another starting point, for instance, is the already vast amount of literature examining publication (and citation) practices as central elements of the reward, communication and stratification systems of science. New journals such as PLoS One assert that they “publish all papers that are judged to be technically sound”, putting into question the “subjective” decisions made by “traditional” journals. For one, this model makes explicit the need for scientists to promote their work (one could argue that this is already the case de facto). However, there is little historical precedent for this type of behaviour on a large scale.
Without resolving the disconnect between “gold standard” and “basic quality control”, peer review has thrived thanks to Churchill-esque statements that it is “the worst system… except for all the others”. But beyond a rhetorical strategy, this is revealing as to the self-perception of science and of its practices. It also reinforces the need to understand peer review in terms of the reward and legitimation practices that are associated with science. As Daniel Engber pointed out several years ago in Slate Magazine, one of the problems is that no one really knows whether peer review works or not. This is indeed problematic, but whether or not it works depends on what you think it’s for4.
Beyond issues of trust or epistemology, peer review is (and perhaps should be) about individual and collective strategies. In astrophysics, for instance, reliance on preprint articles has increased dramatically due to the demands of the field. For certain journals elsewhere, the focus may be on certain types of significance tests, the size of error bars, specific methods for validating mathematical models, the parameters of clinical trials or any number of orthodoxies.
Happily, Sheila Jasanoff and others have begun to put the peer review issue in some historical and social perspective, focusing on issues of trust, authority and scientific vs. broader norms of accountability. Without advocating for or against an overhaul of the system, I believe that there is more groundwork to be done in order to foster a more coherent debate over editorial peer review in its various contexts. Avoiding generalizations and focusing on how its “rules” are constructed and implemented are steps in the right direction.
- See also, for instance: Mark Henderson, British Medical Journal 340, 2010, which focuses on the “anonymity” of peer review. ↩
- Sheila Jasanoff, Science 328, p. 695, 2010 ↩
- John C. Burnham, JAMA 263, p. 1323, 1990 ↩
- Stephen Schneider and Paul N. Edwards, “Self-governance and peer review in science-for-policy: The case of the IPCC Second Assessment Report”, in Miller, C. A., & Edwards, P. N. (Eds.), Changing the Atmosphere: Expert Knowledge and Environmental Governance. Cambridge, MIT Press, 2001 (p. 230) ↩