[Image: The HEFCE logo. Please note I am not affiliated with HEFCE in any way!]
http://www.hefce.ac.uk/whatwedo/rsrch/howfundr/metrics/
The report was instigated by David Willetts. Following his recent departure from government, its release will now take place just after the next general election.
In essence, the report is designed to investigate how new metrics might be applied to future Research Excellence Frameworks. A great deal of evidence has already been gathered: HEFCE received an excellent response to its call for evidence from a variety of stakeholders across the community. HEFCE wants the review to be as open and transparent as possible, so all of these submissions have been made available on the review webpages.
HEFCE are engaging with a variety of metrics across the research system and identifying ways in which these metrics can be used to measure quality. One avenue of analysis is how metrics are gathered. Another is the dynamics of 'gaming': how a measure can be manipulated, and what effect this has on the outcome. These are all important considerations in producing a methodology for assessing the suitability of a metric.
What is already clear is the careful balance that must be struck between peer review and metrics when assessing research. Peer review is not a perfect system, and some of its problems have been illustrated across numerous case studies in recent times. See the following articles for a discussion of this:
Panel review is also tricky when undertaking a REF. HEFCE are conscious that panel review is neither an idealised version of peer review nor a replacement for its rigour. The sheer volume of research assessed under the REF (within a very short timeframe) is an indication of the scale of the challenge faced by the assessors. Similarly, the question of how additional merit is applied during assessment is a nuanced one. Neither approach could be considered a perfect measure.
You see the difficulty...
When questioned about approaches for measuring the quality of research, the response from publishers has been mixed. PLoS take a nuanced approach. Peer review systems differ between publishers and disciplines, and this may explain the mixed feelings on metrics here. It’s difficult to justify a ‘one size fits all’ approach.
An excellent post published on the LSE impact blog by Derek Sayer (Professor of Cultural History at Lancaster University) warns of the potential pitfalls and dangers of measuring quality in research through assessment in this way. In his article he points to the many contradictions of the REF. He believes that whilst metrics may have problems, the panel-assessment processes of the REF that give such great power to individual panel members are far worse when balanced against the impact of assessment on researchers’ careers. Ultimately, he agrees that measuring research quality is fraught with difficulty:
HEFCE have run several workshops to hear about the experiences of different institutions in the lead-up to the 2014 REF. These consultations took place with a range of universities of different sizes and research specialisms. One point of discussion was how they made use of metrics and whether they found them easy to use. Again, support for metrics was split across this group. Participants questioned whether the community currently has the right kind of metrics for certain disciplines (e.g. the arts), and some were sceptical of the value of metrics in certain circumstances, especially for measuring the research quality of an output.
An option that has been considered is devolving greater freedom of assessment to the panels. An interesting point raised by several VCs is that using multiple metric measures, although potentially fairer, would make their job of monitoring institutional performance much harder, as it would require an in-depth understanding of the functions and outputs of these measurements across disciplines.
Another option is to operate assessment at the institutional level (through more peer review) and to do away with much of the assessment work of the panels. Again, the views of the academic community appear to be split over this proposal. HEFCE wants the evolution of research quality measures to take place with the consensus of stakeholders across the scholarly community: academics, institutions and HEFCE have a shared commitment to the continual improvement of the UK's research base.