by Massimo Pigliucci
A recent piece by Scott Jaschik in “Inside Higher Education” pointed out what a number of my colleagues have been thinking for a while now: the peer review system for scholarly journals doesn’t work very well, needs to be reformed, and really ought to take radical advantage of new technologies. There is, of course, going to be quite a bit of resistance to any change coming from the usual quarters, beginning with older academics who still think of social networking in terms of meeting colleagues after work for a martini (well, okay, nothing wrong with that), administrators who are used to the simple (and simplistic) bean counting operations for tenure and promotion made possible by the current system, and journal publishers who make a ton of money while adding next to nothing in value to people’s publications (after all, they don’t pay for the research, don’t pay the writers, and don’t pay the editors and reviewers — which of course doesn’t stop them from charging an arm and a leg to university libraries).
Of course, since the new technologies are making an overhaul of the system possible, and since there is widespread frustration with the current modus operandi especially among younger faculty, change will happen one way or another — witness the rise of open access and online journals that bypass traditional publishers. It’s only a question of which paths to take, and that’s where the conversation gets interesting.
The most radical suggestion mentioned in the Inside article is the one by Aaron J. Barlow, associate professor of English at the City University of New York, where I work. Barlow is quoted in the article as saying that “peer review — in the sense that people work and a consensus may emerge that a given paper is important or not — doesn’t need to take place prior to publication.” He is, of course, right and as a matter of fact most peer review has always taken place after publication. A lot of bad or simply irrelevant stuff gets published and ends up augmenting someone’s c.v. by a line or two (good for promotion and tenure!), but then dies the common death of much academic scholarship: complete lack of citations by anyone other than the author.
The question that Barlow is raising is whether it wouldn’t be better to skip the preliminary step — the pre-publication filter — and simply leave everything to the community at large. I am sympathetic to that position, particularly because as author, editor and reviewer I have seen my share of unseemly behavior, gender and racial biases, personal vendettas, and so on that certainly don’t belong anywhere within a scholarly environment. But I think pretty much everyone agrees that we already have far too much pyrite to sift through in order to find the gold nuggets, and I shudder to think what would happen if anyone were suddenly able to claim “scholarship” by simply posting their papers on the web and asking people — anyone, not just the relevant expert community — to comment, “+1” or “like.”
This is the same problem that has been faced by the publishing and journalism industries. These days anyone can self-publish a book at the click of a button, and anyone can set up an online newspaper with free or cheap software and access to a server. But I doubt these new technological possibilities will spell the demise of editors, publishing houses and newspapers like the New York Times, for the simple reason that these “classic” outlets do exercise a very valuable (if flawed, incomplete, sometimes biased) function of filtering a lot of distracting or poor quality nonsense (as the NYT’s famous tagline says, “all the news that’s fit to print,” or to pixellate, as the case may be).
Another approach commented on in the Inside piece is the one currently pursued by Cheryl Ball, the editor of an online journal on rhetoric and technology called Kairos, and an associate professor of English at Illinois State University. Her journal engages the entire editorial board in a lengthy discussion of every submitted paper, at the end of which an editor is assigned to coach the author on how to revise the manuscript to reflect the consensus of the board. This makes the system much more transparent (the author knows that all editors participated in the discussion, no anonymity on either side) and obviously immensely constructive from the point of view of the author and the community at large. But I seriously doubt this sort of model can be expanded to the whole industry. I edit a small online open access journal in philosophy of science, and even with our low number of yearly submissions it would be impossible to get my editorial board to do what Ball has been able to accomplish with hers. Again, the problem is that there are too many authors out there, and that far too high a proportion of submitted papers is simply not up to even minimum standards, or would require a huge amount of work to get there (not to mention, of course, that — again — editors and reviewers are not paid for this, nor do they get much concrete credit from university administrations for engaging in what they do).
I do not know what the solution is, and I suspect that we will see over the next few years increased experimentation on the part of younger editors to either ameliorate the problems with the current system or to overhaul the thing altogether. Some journals already make the author, not just the reviewers, anonymous, to minimize biases (it is well known, for instance, that women and minorities get fewer papers accepted if the reviewers know their names, and that the effect disappears if authorship is kept anonymous). Others publish all submitted papers that are technically correct — meaning that they are written in an intelligible manner and include all the necessary documentation — while leaving it to readers to judge the intrinsic value of the authors’ findings and opinions. We certainly are on the cusp of a technologically driven revolution in academic publishing, but just as in the already mentioned cases of book publishing and journalism, it remains to be seen exactly what will be left standing and what will have arisen anew once the storm has passed.
This is all very interesting Massimo. My concern is indirectly related. More than ever before you have people with science degrees and PhDs after their name writing books and articles targeted at the general population. How can we get more peer review for things of this nature? I worry about someone like Sam Harris getting into bed with someone like Newt Gingrich and instituting a program along the lines of the 1984 and Brave New World mad scientist ideas for controlling people that Harris outlines in his book The Moral Landscape (ML). Of course, as you, H. Allen Orr and Kenan Malik all pointed out in your reviews, what Harris is talking about isn’t science. It’s a technology for controlling people; and as Malik points out, it claims to be science because Harris wants to co-opt the authority of science. But most of the reviewers and most of the people talking about this book don’t understand that. Books like this can only exist because the public is pretty much scientifically illiterate. I can imagine people like Michael Shermer and Jerry Coyne and all the enthusiastic readers of ML getting behind Newt and Sammy and helping them to transform our very flawed and conservative government into a fascist nightmare.
But the stakes needn’t be that high. I never would have known Steven Pinker was using meaningless data to support his claims in The Blank Slate had I not read H. Allen Orr’s review of it. Most of the other reviewers didn’t know enough to see the book’s problems.
Your comment was so interesting I googled for a link to the Orr/Sam Harris interview you alluded to. Here's the link for others who might have been similarly intrigued.
[I hope that I have the right one.]
This is a big topic and I completely agree that changes need to be made, but I am not sure how radical they should be. There are many aspects to this issue of peer reviewing. For one, there are way too many articles published in any discipline and it's impossible to keep up with everything. There is also a huge burden on reviewers, who are asked to review many papers and do it for free. One thing that I would like to see change, though I doubt it ever will, is to see the track record of every article. Many articles are rejected from a journal with good reason and detailed reviews, only to be accepted without any change in another journal, where readers have no clue that the article was harshly criticized elsewhere.
One good thing that has happened in recent years is that many new journals now accept only short papers, and the review process is much faster than before. You can also access most articles the day they appear online and don't have to wait for the hard copy (if you really need one at all).
I do believe that the scientific community needs safeguards, and publishing everything online without any screening is a recipe for disaster. I think that in order for peer review to change, the whole academic model needs to change and not rely so heavily on publications.
I think you are 100% correct in your prediction of a slow adoption rate of new technologies by academics.
As I've worked with journal editors to learn about their peer review process, I have noticed a systematic disincentive for scholarly journals to adopt new technologies, rooted in the merely indirect career benefit that such technologies offer. Scholarly journal editors build their personal careers through their own publications, while editing and reviewing for journals is considered an essential but clearly de-emphasized component of tenure and hiring. Also, the responsibility of journal editor is almost always a part-time job, so the benefits of technology are really only experienced as part-time benefits for the editor. Similarly, most journals' home departments and home institutions do not directly benefit from successful publications, either in terms of reputation or financially (the exception being the top ~10% of journals, measured by distribution or number of subscriptions). Scholarly publishers, on the other hand, advance their individual full-time careers and company interests when they convince journals to use their software and their publication/distribution services, and thus have more time and a more direct incentive to mobilize technology for the status quo, compared to editors, who have less incentive to use technology to overturn it.
All this is to say that I think you're right that change will come, but my impression is that the speed of adopting technological benefits for scholarly publishing will remain slow until the value-add is so large that scholars see immediate benefits to their careers or for their home institutions, or when new digital publishers challenge old publishers by sharing the profits with the journals directly.
Every part of the methods of science has to be subject to constant improvement, and that includes peer review. But please, whatever you do, don't just make it a free-for-all. Can you imagine the mess creationists, climate change deniers and other fringe movements would make of that?
There has to be some pre-publication filter so that the most blatant crap doesn't get spread like wildfire.
Very interesting topic. I completely agree with anonymizing authors - no idea why this is not already done in my area. Admittedly there are some cases in which I could deduce the author's identity from the topic, but it would help, especially in the case of up-and-coming colleagues.
I completely agree with the need for more online only not-for-profit journals - if they can get an impact factor/become renowned enough that they count for annual performance reviews and job interviews, of course. As things are, it would be career suicide to publish in no-name journals.
But the suggestion that peer review should take place after publication? I have read about that idea elsewhere too. They asked, why not just publish everything on your personal blog and have the reader suggest improvements? Ye gods. The answer is so obvious, isn't it? If I think of the pre-peer-review, pre-editing quality of some manuscripts I was given to review, I shudder to think they would be published in that state. (Or at all, in the case of the last one I did.) Also, does anybody seriously think that the authors could be bothered to actually make the needed improvements suggested by readers if the publication has already taken place? Dream on.
And that is before considering on how many blogs I would have to hunt for relevant "publications" on my study group or methodology. As things are, I need a new issue alert for half a dozen journals to have a good overview of major developments in my field, and if somebody publishes a new method in Bioinformatics or Systematic Biology I know to take it more seriously than if they just rambled about it on a blog. Would be nice to keep it that way.
People might be interested in the peer review "series" run by Nature (UK) since 2006 (albeit an interested party) and by Scientific American, EcoEvoLab, and Kevin Zelnio's blog. Also the London School of Economics and Political Science runs a blog on the issue, including collaterals.
By the way, I believe that after the peer review is completed, the reviewers should be acknowledged in some way, even adding them as co-authors. Many reviewers vastly improve the article and spend a lot of time analyzing it. This is one way to encourage reviewers to do a good job.
We need something like Wikipedia. With the interest level for certain areas of science, it would be good to have anyone who reads an article review it, giving it a rating and as detailed a written review as desired. But also provide a rating for the reviewer indicating their credibility and credentials. This way, you can tell if the negative reviews are by specialists in the area or uneducated idiots. Maybe even eliminate anonymity altogether, which would also allow people to look at who's praised and who's criticized a particular article. In some areas with very few readers of an article, the politics behind the review might be apparent to others in the field if they knew who said what.
Beth (not Mark, just using his account)
That is not entirely practical. At least in my field the reviewers usually prefer to remain anonymous, especially when they have to criticize the papers of influential people - I hope I do not have to explain why. Of course, if they do not have a lot of negatives to say, if they feel secure enough relative to the influence of the author, or if they are even on good personal terms with the author, they may waive the anonymity, but that is rather the exception.
So what we do is we usually write "...and two anonymous referees for helpful comments on an earlier version of the manuscript" at the end of our acknowledgements, and the journals publish a list of the names of all that year's reviewers in the last issue of every year.
Yes, I know it's not practical in some fields. I think it should be an option, though. The main problem is to find a way to encourage people to review articles and have some incentives to do so. One other possibility would be to give industrious reviewers an advantage when their own articles are reviewed, or simply to pay them.
By the way, APA does not allow even thanking the reviewers anymore at the end of the article.
I should add that, in my opinion, if a reviewer has enough to add that they deserve a co-authorship, then the article was by definition not fit for submission and should have been rejected outright. You earn a co-authorship with a substantial contribution to study design, evaluation and manuscript writing. For suggesting something on the lines of streamlining the discussion, using another statistical test or removing a superfluous figure you get an acknowledgement, and that is appropriate.
I honestly do not get the feeling that there is a real lack of willing reviewers in my field. Everybody knows that you do this in the expectation that others will do it for you, and it seems to work. Payment is a nice idea, but I'd rather reduce the costs of publishing and accessing published articles instead of increasing them.
Sounds to me like this is a design problem.
My advice: a group of researchers concerned with this problem should, with the help of some design researchers, literally re-design the peer review process.
I'd be interested in helping, in that regard, as I'm both an engineer and a design researcher.
As you, and the comments, make clear, we have a lot of work to do before we will be effectively making use of current technological possibilities for scholarship and review (to say nothing of taking advantage of what the future may bring). I've been accused of being a "technological determinist" in saying that blind peer review is dead, but that's not my intent. Blind peer review dies as we become more adept at using the tools at hand... for the old tools then prove redundant and even inhibiting and so are discarded. It is not the technology that is forcing this change, but our own desires, our own struggles to create better scholarly products. At this point, we haven't become skillful enough with what's already available for many to be comfortable with a wholesale discarding of prior blind peer review but, as sites like academia.edu become more and more sophisticated, we will find new ways of sorting the specious from the important--and will find ourselves doing it as "naturally" as we now type.