Critiquing the Critics of Peer Review

I wonder why scientists, who demand rigorous validation of work in their own disciplines, offer such unscientific analyses of scientific practices, in this case paper publishing and peer review.

Peer review, the blind circulation of research manuscripts among a community of reviewers to assess editorial and content fit for a journal, has been breaking down for some time. There are many reasons for this, including work intensification, increasing scientific specialization, and the growing number of journals. Yet these root-cause factors are rarely, if ever, addressed by the proponents of “new” solutions to peer review or scientific publication. Such calls for change have been raised frequently of late. Of course “everyone” knows peer review is broken, a situation especially apparent when you’re on the receiving end of poor review practices. However, scientific reasoning is often tossed out in the pursuit of answers, the typical trap of problem solving without clearly understanding the problem system.

In the same (current) issue of The Scientist that argues for ending peer review as we know it, some scientists (usually in the physical sciences) propose that all scientific publication can be managed by individual scientists. The solution: post manuscripts and data on their “websites.” This is presented as a radical solution, but it is neither radical (it is an old idea) nor a solution (the mess it would create is worse than the current system). Read “Publish or Post?” in The Scientist:

A new European-funded initiative is advocating an entirely new system of science publishing, in which scientists avoid the hassles of traditional peer review by taking a quietly radical step: post their results on their websites.

They suggest making research — including formal manuscripts, datasets, presentation slides, and other presentations — available through the web without any sort of traditional peer-review process. That research would then be searchable and citable by the rest of the scientific community at no cost.

What I find consistently missing from such proposals is any serious consideration of the richly intertwined social system that has formed around scholarly publishing. Activist scientists treat this issue as if it were a simple matter of problem solving, with very little thought about the systemic impact of the proposal or about the underlying causes and drivers of the broken system. Yes, I am accusing these scientists of not having done research in this field, of making proposals based only on their own authorship and reviewing experience. We call this first-person design, and sometimes it works, when the product is a “point solution” or an invention. But when the product is a social system, a larger network of perspectives and stakeholders must be included.

This raises many questions that publishing systems have already worked out, for better or worse. Publishing is not a simple matter of “using the Internet.” Your research and sites may be blocked, inaccessible, poorly maintained, or poorly ranked. Without social aggregation (not social networking, but a system you control), your work may simply be missed.

There are actually several great ideas here, ideas that my collaborators at U Toronto’s Laboratory for Collaborative Diagnostics have been working on for some time. Several projects and early research efforts underway use an interdisciplinary approach of social science, prototyping, and design research to make serious proposals for:

  • Interpretive Collaborative Review – A novel review-generating system based on a multi-perspectival analysis of literature focused on a shared concern among a group of editorial reviewers.
  • BioTIFF – A flat-file continuous storage system for recording and maintaining large image and test data, data sets, and personal health information, searchable and sustainable.
  • Data-first publishing – Providing a means of publishing and indexing data for collaborative analysis, before reductive analysis and interpretation.

Not that we have the answers; in fact, I’m writing to suggest that no one has the answers yet. I simply believe the answers currently being proposed are quite insufficient. We should be investigating the social systems of publishing and review based on the purposes of publication, not on the activity of publishing; focusing on the activity alone leads to underdetermined solutions.

What is the desired long-term outcome for scientists, institutions, labs, and research sponsors? Do scientists really want to take responsibility for maintaining their own websites in perpetuity? Is the individual scientist really the best unit of publication? What about multiple-author situations? Who really owns the research, and is it always the individual author? What about the trending interest in more special issues?

If scientists don’t have the time to peer review today, how are they supposed to maintain publication sites, standards, and indexing? Will there be institutional support for this? Will they be trained, or will consistency somehow evolve on its own?

The biggest problem I see is not the means of publication but the exhaustion of readership. Publishing processes help focus quality, and reader attention is the most limited resource. The sheer volume of new publications today, nearly a million a year in the biomedical and life sciences alone, means that only a small proportion of papers are ever read, and an even smaller number are cited. Would individual self-publishing help this problem or make it worse? Just “publishing on the web” assumes that Google and some yet-to-be-named social network will solve the problems of indexing, findability, and even quality. I don’t see general-purpose internet services as a reliable or responsible answer. We need better tools for qualifying publications against the human limits of readership, and we must build them in partnership with all stakeholders, including libraries, publishers, and technology providers.