One of the problems with this debate is that we are all so self-deprecatory as
a community that we miss some important things about the big picture.
RCUK are trying to justify their budget to the Treasury and Lord Mandelson
(both roles may change in the next 1-2 years:)
on science spending - note that Joe Public is not in this debate - we are not being asked to
justify our existence in the wider arena -
my view is that in the last few years,
public appreciation of science has been very high
(note I say "appreciation", not "understanding")
due to the efforts of great popularisers like
Attenborough and Jones and Hawking and many others.
I think it would be quite easy to win, in public, the fight that
science research is worth doing - that's not the problem - the
"adversary" we have to convince is the senior civil service and
the government in power (as Anthony Finkelstein correctly implied, and
Ross Anderson indicated)....
So what are we looking at in the changing evaluation framework...
1. transparency - is each project (or person) worth funding?
2. scale - is the total budget defensible? (the REF)
3. efficiency - are peer review and the REF/QR mechanism
good ways to assess how to allocate money, and which is better...
A lot of what is going on is that we
are being groomed in a concerted campaign
to believe that this is well motivated
(this grooming reminds me
of the way Marxism and free-market philosophies
were put over on societies at points in history,
or wars are justified to populations) ...
The peer review system is already quite transparent
(if quite expensive) -
What I don't know is why the same word is used
in the new project proposal's 2-page impact statement
as in the HEFCE proposed REF ...
but what is obvious is that impact at the REF level is to do with QR,
so should be assessed on aggregates,
not at the fine grain of individuals, and also
should be assessed on long time scales (not 10-15, but 15-50 years).
There are two obvious reasons for this:
1. high-impact research events are black swans.
2. however, the rare events, considered as a time series
(probably self-similar),
are in fact products of a large aggregate of work in reality
(indeed, large deviation theory about such time series comes
from collections of feedback processes over
multiple time scales) -
the lower-impact work
that may have very low bibliometric impact,
for example (typically drawn from some Zipfian distribution),
is far from irrelevant - it is the background from which
the successful results emerge - without all the searching,
we wouldn't get the significant events. (this is not simply
about negative results - it's about predictability of impact).
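A toy simulation of the heavy-tailed point above (my own illustrative parameters, not drawn from any REF data): if project impacts follow a Pareto-type distribution, a tiny fraction of projects accounts for a large share of the aggregate, which is exactly why fine-grained, short-horizon assessment of individual projects is a poor predictor.

```python
import random

random.seed(42)

# Hypothetical numbers for illustration: 10,000 funded projects whose
# eventual "impact" is drawn from a heavy-tailed Pareto distribution.
N = 10_000
alpha = 1.2  # tail exponent; the closer to 1, the heavier the tail
impacts = sorted((random.paretovariate(alpha) for _ in range(N)), reverse=True)

total = sum(impacts)
top_1pct = sum(impacts[: N // 100])

# A handful of "black swan" projects dominate the aggregate impact.
print(f"top 1% of projects account for {100 * top_1pct / total:.0f}% of total impact")
```

The point of the sketch is not the exact percentage (that varies run to run and with the tail exponent) but that ex ante you cannot tell which projects will land in the top 1%, so the aggregate, over long time scales, is the only defensible unit of assessment.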
What should be done in the REF?
I propose we accept the _collection_ of fine grain data,
but we object to the simple formula used -
the impact needs to be attributed,
and that is where the formula is clearly silly -
attribution of work in research is hard to do,
and the longer the time scale, the wider you have to look -
but we have quite a well-known algorithm for attribution -
I quite like things like PageRank,
satnav, cell phones, the Wii games controller/console,
PVRs with online programme guides, engine management systems,
programmable dishwashers, washing machines, ovens,
etc. as examples of CS impact... we should get a big list together and
divvy it up amongst us... there's plenty to go around...
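To make the attribution point concrete, here is a minimal PageRank-style sketch (the graph, paper names, and parameters are all hypothetical, invented for illustration): credit flows along citation links, so obscure background papers cited by a headline result accumulate a share of its rank rather than the headline taking all the credit.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank.

    links: dict mapping each node to the list of nodes it cites
    (i.e. the nodes it attributes credit to).
    """
    nodes = set(links) | {v for vs in links.values() for v in vs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, cited in links.items():
            if cited:
                # split this node's credit among the works it cites
                share = damping * rank[v] / len(cited)
                for c in cited:
                    new[c] += share
            else:
                # dangling node: spread its credit evenly over everyone
                for c in nodes:
                    new[c] += damping * rank[v] / n
        rank = new
    return rank

# Tiny hypothetical lineage: one headline result citing two obscure papers.
graph = {
    "headline_result": ["obscure_paper_a", "obscure_paper_b"],
    "obscure_paper_a": [],
    "obscure_paper_b": [],
}
ranks = pagerank(graph)
```

On this toy graph the two cited "background" papers each end up with a higher rank than the headline result itself, which is the attribution behaviour the argument above asks for: impact credited backwards through the work it emerged from.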
p.p.s. at the other extreme,
economic impact is rarely large, but some cases can justify an entire programme's
existence - for example, DARPA programme managers used to enjoy telling US Congress
that in a single tax year, Cisco paid enough tax to justify the entire ARPA research budget
for developing the Internet (on the order of 600M dollars)...
if you're interested in the problems of research assessment,
one interesting source is the analysis of programme committee
reviews from a number of conferences - for example, see