Peer review week: Panel discussion at Cambridge University Press
On 17 August 2017, I participated in a panel discussion about peer review organised by Cambridge University Press (CUP). This post summarises some of the questions and the notes that I wrote in preparation for the discussion. The panel discussion was filmed and a 30-minute video will be produced as part of Peer Review Week.
The panel discussion video has now been published
The official CUP Peer Review Week page is here.
Note that in the text below, I will systematically refer to the dissemination of research outputs and explicitly avoid talking about publishing research papers. This is for two reasons: (i) we mustn’t limit ourselves to papers (or books) but should promote a more diverse set of outputs that better represent the actual work, and (ii) publishing is most often done by publishers that have conflicting interests due to their commercial nature.
About me
My name is Laurent Gatto (I’m lgatt0 on twitter). I am a principal investigator in the Cambridge Systems Biology Centre, where I lead the Computational Proteomics Unit. I am a computational biologist, and my research focuses on high-throughput biological data and on the development and application of computational methods and software to shed new light on biological processes.
I also consider myself an open and digital scholar. I value every aspect of open science, from open and FAIR (Findable, Accessible, Interoperable and Reusable) data, open and collaborative software development, open access, open peer review (when appropriate), fast dissemination of research through pre-prints, …
My views on peer review will reflect my interests and area of research: open and collaborative research, and the open dissemination of a variety of research outputs, including papers (and, to a lesser extent, books) as well as data and software.
What does good peer review look like to each panellist?
In my opinion, good peer review should
- Support the dissemination of sound and valid research
- Highlight areas that need or could be improved
- Provide constructive comments
The focus is on validity (as opposed to novelty, relevance, …), and I think that some emphasis should be put on the data, software and methods underlying the claims, to promote and support better reproducibility of, and thus greater trust in, the research.
Good peer review should also be multidisciplinary. Given the increasingly multidisciplinary nature of a lot of research, peer review has to follow the trend. I don’t think peer review can still claim its gold standard status if it relies on a limited number of reviewers who often won’t possess all the necessary skills to cover the multidisciplinary nature of the research (and that includes technical expertise in terms of data, data processing, software, …).
Transparency would also be a feature of good peer review, as transparency leads to greater trust. I don’t necessarily think that transparency implies open/public peer review.
Timely peer review is important, with the reservation, however, that speed shouldn’t reduce the quality and depth of the review. That’s where pre-prints play an important role, so that one can disseminate research independently of a possibly lengthy peer review process.
What do publishers do to support good Peer Review and what could they do better?
I personally haven’t seen many publishers do much for peer review. Three spring to mind:
- PeerJ has a nice interface for entering your peer review and they explicitly offer the possibility to post reviews (speaking as a peer reviewer here);
- F1000Research and their open review/commenting system (as reviewer and author);
- the consultative peer review of eLife, although I haven’t personally experienced it yet.
But I am not sure that publishers should be the ones to drive peer review. I would very much prefer active researchers to play the key role here. Read more about my views on publishers’ role in peer review below.
Are current practices of peer-review appropriate and sustainable in the future?
I think that serious improvement is needed. Research processes and outputs (in terms of quantity and complexity) have increased, and continue to do so, and the pressure to publish is hitting researchers, in particular early career researchers (ECRs), hard. Traditional peer review needs to adapt.
One aspect that I think is generally accepted is that peer review doesn’t scale. There are too many research papers, and in some fields individual papers draw from many disciplines and skills that can’t be covered by two reviewers. We know peer review isn’t perfect, and these conditions will limit its scope. So I think it is important to accept that peer review, even if it has been considered the gold standard for many years, doesn’t deliver to the same extent anymore, and that it is important to consider other models, such as the systematic submission of pre-prints and their acceptance as first-class research outputs (many funding bodies do so) and post-publication peer review.
I am of the opinion that there isn’t necessarily a single model that would fit all disciplines and types of publications (journals, books, data, software, …), that a combination of peer review models might be necessary (for instance open, single- and double-blind), and that it is important for us to experiment and gather data on how best to disseminate and review our work.
Does peer-reviewing benefit the reviewers, or is it just for the benefit of the authors and end-readers?
Peer review should benefit the authors by helping them produce better research outputs, but that’s certainly not universal.
The benefit for the peer reviewer is that they learn about some research before it’s published (although that’s not necessarily true with pre-prints). Another important benefit is that by peer reviewing, one becomes known to more senior peers/editors; it’s a way to become part of an exclusive club.
There are also big benefits for the publishers - peer review is very low cost for them but is used to promote the perceived quality of their publications (irrespective of the actual average quality of the review), which in turn is expected to sustain their business.
Readers will benefit from peer review as long as it improves the quality of the output without delaying it too much.
What are the roles of open vs blind review?
Open to debate. I prefer my reviews to be open, but there are also situations where blind review has advantages (in particular for ECRs and under-represented minorities).
I have never participated in a double-blind peer review, but I am pretty sure it wouldn’t be too difficult for the reviewer to identify the group the work originates from.
Why hasn’t post-publication peer-review taken off? With the pressure for rapid online publication, should post-publication peer-review be taken more seriously?
I think that the main reason it hasn’t taken off is lack of incentives. It is still seen as a free, altruistic service outside of the traditional and rigid academic system.
Could traditional pre-publication peer-review survive without publishers mediating it?
Why not?
I don’t see publishers as particularly important players in the dissemination of research papers (things might be somewhat different for books, although the lack of open access policies for books is a problem). The problems I see are that publishers deal with only a limited number of research outputs (research manuscripts and books), have failed to support and promote data sharing and reproducible research for many years (there is a slow start among a few now), have pushed ill-informed metrics to promote their business, and are driven, at least partly, by market share and profit.
There’s a spectrum of offenders, of course, and some publishers genuinely try to innovate in favour of the dissemination of research. From my perspective, the commercial aspects of publishers haven’t had the best impact on the dissemination of research outputs, and there is an argument to be made that scholarly communications shouldn’t just be open, but non-profit too.
One experiment I would like to see is for publishers to invite authors to submit their pre-prints. I have heard of such cases but I don’t know if this is something that is done more systematically by some journals or publishers.
Gender bias in Peer review
There is a bias, and I believe it doesn’t only affect women, but more generally any under-represented minority. I am no expert on the matter, but I feel it is important to acknowledge the issue and make all possible efforts to reduce any bias, both as part of the peer review process and as part of our efforts to promote open science.
Future of Peer Review
I hope the future of peer review will be linked to the future of research dissemination: open and transparent, focused on faster and broader dissemination of our work.
Some additional post-panel notes
During the panel discussion, Monica Moniz from CUP mentioned an approach that I find very interesting: journals scanning pre-print servers and inviting authors to submit their work. I have only heard of a few such cases on twitter, but it is something I would be very happy to see more of. If publishers do that, it would be great for them to publicise it.
Publishers get involved at the very end of the process, once the advertisement for the research is ready. Often, when it comes to the dissemination of the outputs of that research (data, software), and the review and improvement of the work, it is too late. More openness and transparency are features that we should strive to initiate as early as possible. Monica mentioned that there were efforts for publishers to get involved early on. While I am sure there is room for collaboration on such initiatives, I think it is important for the research community to remain independent from publishers. History has demonstrated that the commercially driven interests of most commercial publishers have harmed the dissemination of research outputs. Publishers shouldn’t be given even more power and control by pretending to support researchers throughout the whole research process (see for example here and here). Publishers could, however, get involved in supporting (financially) and promoting (using) community-led initiatives and platforms.
When it comes to submission for peer review, there are also some very simple things publishers could do right away: spare us the silly and pointless formatting requirements, and make sure that the manuscripts we review are in a format that suits reading and reviewing by putting the figures in the text - as far as I understand, the separate submission of figures is only for the benefit of the publishers way down the line, if the paper gets accepted.