Excellence in Research for Australia

Executive Summary


The Cooperative Research Centres Association Inc. (CRCA) is pleased to provide feedback to the

Excellence in Research for Australia (ERA) Initiative’s Consultation Paper. This submission is focused on

providing some insights that will address the proposed approach “Indicators of Success in Applied

Research and Translation of Research Outcomes” as presented on Page 8 of that Consultation Paper, and

in particular the reference to

“…measures of excellence in applied research and translation of research outcomes…”.

The Consultation Paper notes that while there are a number of well established indicators available in

relation to research activity and intensity and research quality, developing indicators of excellent applied

research and translation of research outcomes is a more complex task.

The CRCA is acutely aware of the many difficulties associated with impact articulation through its work on

establishing the impacts of the Program and through its involvement with CRC Program application

processes and performance reporting systems. These perspectives and knowledge have been utilised in

the development of this submission.


The CRCA is the representative body for the organisations operating within the Australian Government’s

Cooperative Research Centres (CRC) Program. The purpose of the CRCA is to promote science in

general, with a particular focus on the future growth of the CRC Program.

The CRCA is an independent body, funded by fees paid through voluntary membership. The CRCA

Constitution states that only bodies classified as “Cooperative Research Centres” by the Australian

Government are eligible to be members of the CRCA. The current membership comprises all 58 CRCs.


The CRC Program was established in 1990 by the Hawke Government with the aim of shifting the culture of industry away from specific, short-term problem-solving research towards a longer-term, strategic approach to investment in research.1 Over the course of its 18-year existence the CRC

Program has met that aim and improved the effectiveness of Australia’s research effort through bringing

together researchers in the public and private sectors with the end users. The CRC Program links

researchers with industry and government with a focus towards research application. The close

interaction between researchers and the end users is the defining characteristic of the Program.

Moreover, it allows end users to help plan the direction of the research as well as to monitor its progress.

Since the commencement of the Program, there have been ten CRC selection rounds, resulting in the

establishment of 168 CRCs over the life of the Program that have operated across Manufacturing, ICT,

Mining & Energy, Agriculture & Rural Based Manufacturing, Environment, and Medical Science &

Technology sectors.

Reflecting its broad areas of activity, the CRC Program draws funding and in-kind resources from a wide

range of sources. Displayed below is the resourcing profile for CRCs in 2006-07.

1 Myers, Rupert, Changing Research Culture, Australia – 1995: Report of the CRC Programme Evaluation Steering Committee, Australian Government Publishing Service, July 1995.

Feedback on the Excellence in Research for Australia Initiative, Consultation Paper CRC Association

[Figure: 2006/07 Total Resourcing Profile]

3.1 The Performance of the CRC Program

The conventional definition of a CRC is “a company formed through a collaboration of businesses and

researchers. This includes private sector organisations (both large and small enterprises), industry

associations, universities and government research agencies such as the Commonwealth Scientific and

Industrial Research Organisation (CSIRO), and other end users. This team of collaborators undertakes

research and development leading to utilitarian outcomes for public good that have positive social and

economic impacts.” 2 However this definition only tells a part of the story. As the Program has grown

and matured, further benefits have emerged, including:

  • CRCs assemble multidisciplinary teams from across research providers to address end user driven research. They collaborate across all sectors (Industry, Academia, State Government, Consumers and Industry Associations) and create a critical mass in their field.
  • CRCs provide companies, including multinationals, with the unique and attractive proposition of being able to deal through one organisation (the CRC) that can assemble the best teams in Australia to develop the technology the company needs, managing the process professionally against deliverables, and gearing it with funds from the Commonwealth and from research providers who share the risks and the returns.
  • CRCs are managed to deliver impacts not just papers, and are held to account to deliver. The stability of funding provides certainty for the research partners in particular and also for the end-user partners. The overall activities are actively managed by the CRC management team and Board to maximise the national benefits. This includes terminating, redirecting or accelerating projects in a way that is not part of the culture of most other programs.
  • CRCs provide a mechanism for realising unanticipated commercial opportunities, i.e. in cases where technologies have applications beyond the interests of the commercial partners, the CRC can pursue these through the creation of spin off companies, licenses etc.
  • CRCs play an important role in bridging the gap between discovery research funded by NHMRC and ARC grants and the requirements of industry for commercialisation-ready innovations.
  • CRCs encourage innovation through their interaction and reach with SMEs (for example, the CRC for Spatial Information interacts directly with over 55 SMEs).

2 www.crc.gov.au




  • A CRC is neutral and unaligned, and so can provide a central focus from which collaboration grows.
  • CRCs provide research management skills and discipline. This helps ensure the research is managed to a high standard.
  • CRCs foster “hands-on” learning. Although they are heavily focused on postgraduate education, and thereby providing training for very highly skilled professionals, CRCs are involved, to differing extents, at all levels of the education and training system.

The 2006 study on the economic impacts of the CRC Program commissioned by the Australian Government3 included fifty examples of economically quantifiable beneficial applications of CRC research. In these solid, quantified examples, only the clearly measurable components of the outcomes were included in the calculation of the net economic impact of the Program. Looking only at these clearly

quantifiable impacts, the study showed that for each dollar invested in the CRC Program, Australian Gross Domestic Product is cumulatively $1.16 higher, and Total Consumption cumulatively $1.24 higher, than they would otherwise have been had the money instead been used for tax reductions. It is important to note that Gross Domestic Product and Total Consumption are two critical indicators of the economic welfare of the Australian community rather than measures of the private returns to CRC participants.

Since its inception the CRC Program has been regularly and meticulously reviewed. The success of the

Program has been recognised not only within Australia but also internationally as the CRC Program has

been researched, emulated and even copied by a number of other nations.


4.1 The Challenges

A threshold issue in the measurement of excellence in applied research and the translation of research

outcomes is whether the assessment will focus on assessing the level of ‘activity’ relating to applied

research and research translation or on assessing the ‘impacts’ of applied research and research translation.


4.1.1 Measuring ‘Impact’ rather than merely ‘Activity’

Measuring ‘activity’ can be done relatively cheaply and with a heavy reliance on quantitative

performance metrics. If the assessment is focused on process, a range of activity measures can

be used to rate the performance of different research groups. Such measures could include

things such as:

  • presentations to end users of research
  • exhibitions and/or performances
  • commercialisation activity measures such as patenting, licensing of IP and formation of spin-off companies
  • number of research contracts entered into with end users of research.

While these indicators may provide evidence of the extent to which a research group is engaging

in activities designed to promote the beneficial application of research, they do not, however,

3 https://www.crc.gov.au/HTMLDocuments/Documents/PDF/CRC_Economic_Impact_Study_Final_121006.pdf



provide evidence of the actual ‘value’ to society that is resulting from the applied research or

research translation activity.

Measuring ‘impacts’, by contrast, is a more complex and costly undertaking and will necessarily

require a greater use of expert judgement in performance assessment.

4.1.2 Measuring the flow of the ‘Impact’

The key challenge in assessing the impact of research is to convey the story of how a research

group’s inputs have flowed through to delivery of an end impact for the community. Establishing

a clear, verifiable, quantifiable and causal link between particular funded R&D and specific final

impacts for the community is an inherently difficult process due to issues such as:

  • time-lags involved in the translation of research outputs into final outcomes for society may be considerable. It often takes time for the true quality and value of research to become apparent
  • difficulty in attributing outcome ‘effects’ to particular research ‘causes’. The quality of research, the extent to which the knowledge is diffused to those in a position to use the knowledge to generate impacts, and the ability of research users to extract full value from it will all influence the final impact of research
  • disentangling the contribution of research performed in Australia from research performed elsewhere when assessing the impact of research
  • the fact that outcomes often require many non-research inputs for the ‘value’ to be realised from the application of research
  • the lack of a contractual ‘paper-trail’ in cases where public domain knowledge (such as that in academic publications) resulting from R&D may be taken up by many end users
  • some research outcomes with commercial potential are kept secret and are hence both unidentifiable and unmeasurable
  • difficulties in attaching a ‘value’ to outcomes in the environmental, health and social spheres where the outcomes are ends in themselves rather than means to deliver an economically quantifiable value.

4.2 The Dangers of utilising Proxy Measures

There is always a temptation to base evaluations of research impact on what can be easily measured

rather than on what is important but difficult to measure.

Unfortunately, very few informative proxy indicators are available for either measuring or projecting long

term research impacts. For instance, measures such as patents held, licences executed and spin-off

companies formed are sometimes used as proxies for the likely future economic ‘impact’ of research or

the ‘excellence’ of research translation performance. The use of such measures, or even of more sophisticated measures of commercialisation performance such as the turnover of spin-off companies or the value of licensing revenue, while perhaps of some predictive value for research that is highly targeted at commercial outcomes, is however inappropriate as a measure of the likely eventual economic, social and environmental impact of research.

4.2.1 The skewed nature of Volume Measures

Volume measures of research usage, such as the number of patents, licenses or spin-out

companies formed based on research, are unlikely even to be accurate indicators of the eventual, narrowly economic ‘value’ of the research usage. This is due to the skewed nature of


value creation from research. A relatively small number of patents, licenses and spin-out

companies account for a disproportionately high amount of the total economic value from such

commercialisation activities, so simply counting these metrics does not provide a sound indicator

of the future value that will result from them.

An example of the skewed nature of the ‘value’ from commercialisation activities is provided by

the distribution of royalty and fee income of the University of California (UC). In the United

States, UC is the largest generator of revenue from the licensing of technology. In financial year

2007, UC generated US$97.6 million in royalty and fee income derived from 1,592 inventions. Of

this income, the top 25 commercialised inventions generated US$73.9 million, or 75.7 percent of

total income. The distribution of income amongst the top 25 revenue generating inventions is

further skewed toward the highest earning inventions. In 2007, the top five inventions generated

US$47.7 million, or 48.9 percent of total income.4
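The concentration figures reported above can be verified with a short, illustrative calculation (the dollar amounts are those from the UC annual report cited; nothing beyond the reported figures is assumed):

```python
# Illustrative check of the income-concentration figures reported above
# (UC Technology Transfer Annual Report 2007).
total_income = 97.6   # US$ million in royalty and fee income, from 1,592 inventions
top_25 = 73.9         # US$ million generated by the top 25 inventions
top_5 = 47.7          # US$ million generated by the top 5 inventions

share_top_25 = round(top_25 / total_income * 100, 1)
share_top_5 = round(top_5 / total_income * 100, 1)

print(f"Top 25 inventions: {share_top_25}% of income")  # 75.7%
print(f"Top 5 inventions:  {share_top_5}% of income")   # 48.9%
```

The calculation makes the skew explicit: fewer than 2 percent of inventions account for around three quarters of the income.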

Another example of the skewed value from commercialisation activity is demonstrated in a 2003

report commissioned by the Australian Institute for Commercialisation which found that only

around one out of every one-hundred high-growth technology-based companies started in

Australia has reached a revenue level of over $50 million per annum.5

A real danger in selecting such volume metrics as proxies for the impact of research translation is

that these will be the metrics that research groups will focus on delivering – even if that is at the

expense of generating actual economic, social or environmental value from the research. For

example, in some cases patenting a new technology may be a less appropriate mechanism for

protecting the intellectual property than keeping it commercial-in-confidence would be, but if

patents are counted and ‘rewarded’ then the tendency may be to patent as many discoveries as

possible. This is just one of many potential perverse incentives that use of inappropriate

intermediate outcome measures can give rise to. Similarly, sometimes the greatest community

value from research will be delivered from wide dissemination of the research to end users to

encourage rapid uptake, rather than from protecting the intellectual property and attempting to

capture commercial returns from it.

4.3 The potential of “Repeat Business” as a single measure of Benefit Delivery

Perhaps the best (although still limited) single measure that a research group can provide as a proxy

indicator for delivery of benefits is evidence of ‘repeat business’. If an end user of research (whether

government or industry) provides ‘repeat business’ to a research group, funding multiple projects over a

period of time or repeatedly using the outputs of the research group’s activities, this is evidence that the

end user believes that the usage of the research group’s activities is delivering a beneficial end impact. If

they did not believe this to be the case they would not provide the ‘repeat business’. In this case, even if

the impact can’t be specifically articulated, a claim can be made that impact is clearly occurring.

The concept of ‘revealed preference’ is also relevant here and suggests that consideration of repeat

business can also allow judgements to be made on the merits of one research group against another

operating in a similar field of research. Revealed preference refers to a situation where people’s

judgement on the merits of one thing against another can be observed through the people’s voluntary

choices and behaviour. If people are free to act in a range of ways, and assuming that people are in

general rational and aware of their own best interests, when people choose one thing over another, this

can be seen as evidence that they believe the thing they have chosen is most likely to best serve their

interests. Hence, if a research user is free to purchase services or use outputs from a range of research providers in a given field and regularly chooses a particular research group, this indicates a belief that that group delivers a better result than the available alternatives.

4 University of California, Technology Transfer Annual Report 2007.

5 Allen Consulting Group, 2003, The economic impact of the commercialisation of publicly funded R&D in Australia, report for AIC.



Despite providing some useful evidence of impact performance, it should be noted that the ‘repeat

business’ indicator is not able to provide conclusive evidence of the actual ‘value’ that users place on

research or the extent to which they value the research of one group over other groups.


The lack of simple proxies for research impact (which was a clear finding of the extensive stakeholder

consultations that occurred during the development of the now abandoned Research Quality Framework

proposed by the previous Government) has important implications for how research impact can be

assessed within the ERA initiative.

Given the lack of useful data proxies for past or potential future research impact, a system that is

attempting to assess the impact of applied research and research translation, as opposed to simply

measuring activity, will need to be flexible and rely upon what are in effect ‘evidence based stories’ of the

impacts generated by a research group. This would entail a greater use of expert review in the

assessment of excellent applied research and translation of research outcomes within the ERA initiative

than is required in either the assessment of research activity and intensity or research quality.

5.1 The CRCA Guidebook

The CRCA has recently developed a Guidebook6 for its members on how to articulate the impacts that

their work generates. The key elements of this framework, and the guidance provided to CRCs in using

the framework, are outlined below. If it is desired with the ERA initiative to assess the impacts of applied

research and research translation, this approach may provide a useful framework for broader application

in the ERA initiative. Universities could each present a portfolio of performance for each field of research

in the area of excellent applied research and translation of research outcomes that articulates the impact

of their research.

5.1.1 An Input-to-Impact Approach to Articulation of Impacts of Applied Research

and Translation of Research Outcomes

Research inputs, activities and outputs can often be logically connected to usage levels and final

impacts in the community. Once the logic of the input to impact chain has been articulated (and

it must be recognised that this is often not a simple linear process and that there may be several

research feedback loops before an end impact is generated – not necessarily by the same

research groups that undertook the initial activities), the extent to which the impact statement

will be compelling depends upon the quality of evidence supporting claims relating to each link in

the chain.

6 CRCA, 2007, Impact monitoring and evaluation framework: background and assessment approaches




[Figure: The input-to-impact chain]

Key Inputs (e.g. cash, in-kind support, FTE staff years, background IP, equipment access, etc) → Key Activities (e.g. research project, training program, etc) → Key Outputs (e.g. publications, training package, post-grad student, prototype, patents, etc) → Key Usage (e.g. training package accessed by users, process implemented by …, licensing of IP, etc) → Key Impacts (e.g. output gain, improvement, water saving, health improvement, higher quality workforce, etc)

At each of these stages, the relevant performance metrics will be different; some of the key

potential indicators at each stage are listed above. It is also likely that different measures will be

more or less appropriate to each particular research group being considered.
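The stage structure described above can be sketched as a simple data model. This is an illustrative sketch only: the stage names and example indicators come from the framework discussed here, while the class and field names are our own invention, not part of the CRCA Guidebook.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an input-to-impact evidence chain.
# Stage names follow the framework above; class and field names are hypothetical.
@dataclass
class Stage:
    name: str              # e.g. "Key Outputs"
    indicators: list[str]  # performance metrics appropriate to this stage
    evidence: list[str] = field(default_factory=list)  # evidence supporting claims at this link

# A portfolio entry articulates one impact story as a chain of stages.
chain = [
    Stage("Key Inputs", ["cash", "in-kind support", "FTE staff years"]),
    Stage("Key Activities", ["research project", "training program"]),
    Stage("Key Outputs", ["publications", "prototype", "patents"]),
    Stage("Key Usage", ["licensing of IP", "training package accessed"]),
    Stage("Key Impacts", ["output gain", "water saving", "health improvement"]),
]

# An impact statement is compelling only if every link in the chain carries evidence.
def is_compelling(chain):
    return all(stage.evidence for stage in chain)

print(is_compelling(chain))  # False until evidence is attached to each stage
```

The point the model makes concrete is the one in the text: the strength of the overall impact claim is limited by the weakest-evidenced link, not by the volume of indicators collected at any single stage.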

Inputs: Measuring inputs will be essential in determining how much net benefit arises

from research. Input measures are likely to include information about the resources

committed to the project, including the people, infrastructure or facilities, financing,

and IP.

Activity: Measuring performance in research activities is likely to involve data

concerning projects, stakeholder engagement, monitoring and evaluation, training

activities and so on. Collecting data about activities will require project managers to

maintain information about appropriate performance measures. With research

projects the key measures of activity will typically be the extent to which project

milestones have been delivered upon.

Outputs: The appropriate output measure will depend on the nature of the research

question and activity. Output types may include things such as publications, patents,

trials, products, or course completions.

Usage: Measuring the ways external users apply research outputs may be more

complex than the previous stages, which involve intra-organisation data. A key point

at this stage is that the metrics of usage, and the data required, should be agreed

upon as early as possible, so that the external users are able to keep accurate

records of how research outputs are actually applied. If there has been proper

planning, it is likely to be easier to gather consistent and relevant data from external

users of the research or project outputs. Collecting information about how external

groups have adapted, applied and adopted outputs is likely to require a balance

between gathering enough information to be representative, while minimising the

coordination costs involved. One important factor will be to capture the significant

information – from the large, important or influential users – without expending

unnecessary energy on less useful sources. If the model has been used to specify the

intended impact channels, usage can be measured against this. In addition, it will be

important to gather at least some data about what other resources external users

have incorporated into their use of the project outputs. This is so that there can be a

credible attribution of the outcome to the project or program and so that benefits

can be compared to total costs.

Impact: Quantifying the final impact of research is necessarily the most uncertain of

the stages. Impact types, which may include productivity gains, industry

development, environmental, health and social benefits, and value from broadening

options, are not always easy to quantify. Many of these are highly contingent on

future contexts. As well, in a real world situation, few of the outcomes likely to be


considered – productivity gains, population health improvements, environmental

benefits, etc – have only one cause. Even if it is possible to identify the other factors

or resources involved in a particular outcome, it is unlikely that exact quantification

will be viable. However, acknowledgement of the issue that other factors will likely

contribute to an outcome, and a transparent attempt to account for it, is important in

any impact evaluation.

At each of these inputs to impact stages, the focus of measurement will be different. This means

that the types of information required are different; it also suggests that the process of collecting

data, and using it in an estimation of effect, will not be the same along the whole input to impact chain.


In selection of performance measures/indicators, it is important to consider issues of:

  • reliability, validity and credibility of the indicators
  • cost-effectiveness in terms of cost to collect and process.

It is generally preferable that SMART (Specific, Measurable, Achievable, Relevant and Time-bound)

indicators are used for evaluation processes.

To ensure that the right performance measures and indicators are used, some useful checks that

need to be made are:

  • linking them to the logic model and the likely time frame for results
  • ensuring there are not too many indicators (so that too much time is spent collecting and not enough on analysing and using) or too few (so that important aspects of performance are not included)
  • assessing the likely credibility of the measures and indicators to the intended users – including risks to their accuracy due to differing definitions, difficulties in collection, or disincentives to be accurate.

One fortunate consequence of the skewed nature of impacts from research – with a relatively

small number of research applications accounting for a disproportionate share of the total

impacts of research – is that performance portfolios for any given research field will not need to

include a large number of impacts in order to provide a reasonable sense of the overall impact

that a research group has generated. The focus in the evidence portfolios should be on clearly

articulating the major impacts that have resulted from applied research and research translation

activities rather than on exhaustively documenting all instances of impact.


The CRCA believes that it is essential during the establishment of the ERA that a clear decision is taken as

to whether assessment of excellence in applied research and the translation of research outcomes will

focus on assessing the level of ‘activity’ relating to applied research and research translation or on

assessing the ‘impacts’ of applied research and research translation.

If the focus of assessment is to be on ‘impacts’ rather than only ‘activity’, the assessment should be

primarily based on the assessment of evidence portfolios by expert reviewers rather than on the

use of volume based intermediate metrics of activity.

Use of inappropriate metrics would open up the prospect of both misrepresenting the impacts of applied

research and the translation of research outcomes and of encouraging unintended and undesirable

consequences, as Universities would ‘manage to the metrics’ rather than manage towards delivering desired final outcomes for the community.

The CRCA is pleased to offer its Guidebook entitled “Impact Monitoring and Evaluation Framework:

Background and Assessment Approaches” as a resource that the Australian Research Council may utilise


in the development of the ERA. A full copy may be found on the CRCA website at:


Finally, the CRCA believes there is opportunity for the ERA to act as a measure of potential impact by

facilitating the development of strong research groups. The ERA could explicitly reward universities for

forming collaborative research teams that formally involve researchers from the private sector and from

the public sector and other universities. In this way, Australian researchers will be overtly encouraged to

form the optimum combinations of research teams, drawn from wherever necessary, and thereby have the optimum capacity for ultimate high-quality impact.
