There is intense interest in research impact assessment in Australia at the moment. For Cooperative Research Centres, that’s a good thing because we always do well in terms of impact assessment. The recent report by the Allen Consulting Group showed $14.5 billion in benefits and a boost to national GDP.
The much anticipated ATN-Go8 Excellence in Innovation trial was released in Canberra several weeks ago and again, it was terrific news for Cooperative Research Centres. Of the 20 case studies listed in the Appendix of the report, seven (35% of the cases) came from CRCs. In the $9 billion Federal Government “innovation pie”, CRCs represent a 1.7% slice, so to have such a large representation of the cases is an outstanding result in anyone’s books. In a follow-up seminar to the ATN-Go8 release, Frank Stagnitti, the DVCR at the University of Ballarat, presented some impact case studies on behalf of the Regional Universities Network: three of Frank’s five examples came from CRCs.
Last week the Australian Research Council released the next iteration of the Excellence in Research for Australia exercise. At a Melbourne conference where he spoke on ERA, the CEO of the ARC, Aidan Byrne, was at great pains to point out that it is not a ranking system. But human nature being human nature, the only pages I saw on the passed-around iPads over lunch were the ranking tables from The Australian. “Did you see CQU’s jump up the ladder? What are they doing?” “Looks like Sydney got its act together this time.”
In my talk the following day, I expressed a number of concerns about ERA and the trial EIA (Excellence in Innovation for Australia).
Let’s look at cost: gathering masses of paper from around Universities and having panels of people examine and comment on them is an expensive exercise. Both ERA and EIA are based on this methodology. The fact that we have Universities offering full-time “ERA Officer” jobs is scary to me. This is a job that doesn’t teach and doesn’t research, so it adds nothing to the mission of a University. It’s a job to compile data in a certain manner for the single purpose of reporting to ERA. Every one of the outputs gathered will already have been collected for other reporting requirements in the University. We should be able to capture them at low or no cost.
Undoubtedly, ERA has made us look more to the quality of research in Australia. But at a cost of around $100 million per ERA exercise, we should be seeing around a billion dollars in benefits by now if ERA returned at the same rate as research itself. For that money, it should either deliver more or cost less, preferably both.
Because the earlier ERAs were perceived not to take impact sufficiently into account, we have seen the “counterbalancing” exercise of the trial Excellence in Innovation for Australia, with about 12 Universities participating. This is where I think Australia needs to tread very carefully. I’ve heard the ATN-Go8 trial exercise variously called an “analysis” and “good stories”. But which was it?
If you are going to do research impact analysis to the satisfaction of a Treasury or a Productivity Commission, it takes much more than validated case studies. The Allen Consulting Group’s independent analysis of the impact of technologies and services coming out of the CRC Program estimated net present values, taking into account the additional costs to the economy of implementing them and the counterfactual case, and ran the results through Monash’s model of the Australian economy to demonstrate the impacts on GDP and investment. It was the third in a decade-long series, and the methodology reflected comments on the alternative use of capital by firms made by the Productivity Commission in its 2007 R&D investigations. To me, that’s the sort of work needed if you are going to try and speak the same language as officials and Ministers in Treasury and Finance Departments.
If you are telling stories of research impact to the public, then the EIA trial was a heck of a complicated way to do it. I’ve seen no media on the actual impact stories from the trial to date, only stories about the trial, and those only in fairly specialised places.
I’m not saying we shouldn’t conduct the ERA or EIA exercises.
I’m saying that before we lock ourselves into regular costly assessment systems, we should look very carefully at the processes involved. We should also spend more time and effort learning the lessons of these exercises and looking to the future, rather than evaluating the past. For example, if a proxy measure that a firm like Thomson Reuters could provide can be collected quickly and easily and gets us 80 or 90% of the way there, then we should take it. If one program like CRCs is disproportionately showing up in measures of impact, then we should implement some of its features in other programs – doubling the average grant length, for example, to give researchers time to make an impact. If telling the incredible stories from Australian research is important (and it is, three exclamation marks), then let’s employ more people to tell those stories, but have them walk around with the researchers and write the stories.
The next ERA is due in 2015. The 2012 exercise improved on 2010. Rather than ending up with a “counterbalancing” EIA, couldn’t we just have a single “excellence and impact” assessment?