Thank the fates! The RQF is dead

The previous Australian government, in its ongoing quest to out-mediocre the rest of the world, had instituted a "Research Quality Framework", liberally borrowed from a failed exercise in Britain. Now the new government has declared it dead. It will not be missed.


Why do you think the RQF is bad, John? Under the current system, we are rewarded equally for each peer-reviewed publication, so a paper in Nature or Science is worth the same as a paper in Rivista di Biologia. That's crazy. Something like the RQF, which rewarded quality and not just quantity, seems much better. It provides an incentive to get work right, and not just publishable.
BTW, the rumor is that something RQF-like will come in anyway.

Three reasons:

1. It was arbitrary in the way it assigned ranks to journals. While a lot of it was based on responses from the disciplines, so far as I could see the universities themselves did a lot of filtering.

2. It relied on the badly flawed h-index.

3. It was massively work-intensive.

I'm not against some kind of qualitative ranking of work, and at the least a book ought to count as more than three papers' worth of work, but this particular system was flawed as hell.

I'm not aware of it assigning rankings to journals at all. As I understood it, assessors were (in theory) ranking work based on its intrinsic merit. Of course, it is very likely that they would use journals as a proxy for quality, but so far as I am aware that was left up to the assessors. But I could be wrong (that would make three times).

In philosophy, there is a pretty good consensus on journals - at least at the top - so if I am right we would get few perverse results.

By Neil Levy (not verified) on 20 Dec 2007 #permalink