Bad news today – we just had a major grant proposal turned down. It’s the same old story – they thought the research we were proposing (on decision support tools for sustainability) was excellent, but criticized, among other things, the level of industrial commitment and our commercialization plans. Seems we’re doomed to live in times where funding agencies expect universities to take on the role of industrial R&D. Oh well.

The three external reviews were very strong. Here’s a typical paragraph from the first review:

I found the overall project to be very compelling from a “need”, potential “payoff”, technical and team perspective. The linkage between seemingly disparate technology arenas–which are indeed connected and synergistic–is especially compelling. The team is clearly capable and has a proven track record of success in each of their areas and as leaders of large projects, overall. The linkage to regional and institutional strengths and partners, in both academic and industrial dimensions, is well done and required for success.

Sounds good, huh? I’m reading it through, nodding, liking the sound of what this reviewer is saying. The problem is, this is the wrong review. It’s not a review of our proposal. It’s impossible to tell that from this paragraph, but later on, mixed in with a whole bunch more generic praise, are some comments on manufacturing processes and polymer-based approaches. That’s definitely not us. Yet I’m named at the top of the form as the PI, along with the title of our proposal. So this review made it all the way through the panel review process, and nobody noticed it was of the wrong proposal, because most of the review was sufficiently generic that it passed muster on a quick skim-read.

It’s not the first time I’ve seen this happen. It happens for paper reviews for journals and conferences. It happens for grant proposals. It even happens for tenure and promotion cases (including both of the last two tenure committees I sat on). Since we started using electronic review systems, it happens even more – software errors and human errors seem to conspire to ensure a worryingly large proportion of reviews get misfiled.

Which is why every review should start with a one-paragraph summary of whatever is being reviewed, in the reviewer’s own words. This acts as a check that the reviewer actually understood what the paper or proposal was about. It allows the journal editor / review panel / promotions committee to immediately spot cases of misfiled reviews. And it allows the authors, when they receive the reviews, to get the most important feedback of all: how well did they succeed in communicating the main message of the paper/proposal?

Unfortunately, in our case, correcting the mistake is unlikely to change the funding decision (they sunk us on other grounds). But at least I can hope to use it as an example to improve the general standard of reviewing in the future.

2 Comments

  1. Excellent. Or not.

    The UK Parliamentary Science and Technology Committee is holding an investigation into peer review: http://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news/110127-new-inquiry—peer-review/

    The deadline for written evidence has passed, but I rang them and they said that the members will still see any submissions made now (although they might not get full consideration in the report). Would you be interested in submitting written evidence? scitechcom@parliament.uk

  2. Believe it or not Eli sits on a yearly panel where the program manager bans this on the grounds that any mistake will be used to beat him about the ears. Eli puts one in anyhow and then ditches it when the panel is done and sends the review up to the PM.
