In March 2018, all Australian universities will be submitting key performance indicators (KPIs) on their engagement and impact to the Australian Research Council. These measure how well universities engage with the non-academic community and what impact they have had on society over the last five years.
Encouraging research to have more impact on society is a worthy aim, but not every impact needs to be measured.
The Australian government is now reviewing the guidelines for these KPIs. Now is the time to get it right.
Why do we use KPIs?
There are certain times when it’s appropriate for governments to impose KPIs on external organisations.
In the private sector there’s a market that shows when businesses have made goods in the quantity and quality the consumer wants. Unfettered purchase is proof enough that the right product has been delivered at the lowest cost. However, this automatic system of proof breaks down when the purchaser is not the consumer.
The separation between purchaser and consumer is common in government services. For example, in library services or rubbish collection, the government purchases the services of workers and materials, but residents consume the services.
The unhappy consumer has no recourse other than to complain (whereas in the private sector they could take their money to another provider). To avoid this ad hoc complaints-based system, and to mimic the best features of the market, the government may set KPIs (such as books borrowed or spilled rubbish) to ensure the producer is acting in the consumers’ best interests.
In the university sector, the student is, for many courses, both the purchaser and the consumer of teaching services (hence KPIs are probably not needed), but this is generally not the case for research.
The purchaser of research is often a government and the consumer is society. Taxpayer funds are handed out to academics based on the judgement of expert committees.
To date, aside from mechanically monitoring “progress”, there has been little oversight by the government on how well these funds are used and what their effect has been. Has society-at-large benefited? No one has yet been able to answer this.
Although achieving societal impact is rarely promised in grant applications, KPIs are a good way to start changing university research culture. Marks out of 10 is something academics understand.
Who sets the KPI?
Not all university research uses public funds. Some research is bought directly by the consumer (public organisations or businesses), as collaborative or contract research.
This form of research doesn’t need an artificial government-driven KPI. Similarly, governments may allocate funds via a matching model, so their portion is at least driven by one consumer (the industry collaborator).
Other research is funded through philanthropic bodies or universities’ own endowments. Again, the researcher here is answerable to the trust fund rules, not the government.
However, considerable research funds (perhaps half) are provided through the government and demand some form of public accountability — not only to improve allocative efficiency and change academics’ behaviour, but also to assure the public their taxes are used for community benefit. KPIs therefore have a role for this portion of research.
How KPIs could be changed
If the government wants to encourage universities to translate their research, it should explicitly pay for it. Engagement and impact are expensive activities, and to date academics who have contributed have often done so against the push by their heads of department to publish only in research journals.
The KPIs should be relative to the quantity of research funding received from the government.
There’s a well-known dictum (often called Goodhart’s law) that says whenever a measure becomes a performance metric, it ceases to be a good measure.
When it comes to universities collaborating with businesses, patent applications are both used and abused as an indicator of reaching KPIs. For example, a patent-based KPI introduced in China has lowered the quality of patent applications.
There are good arguments why the business (not the university) should own the patent arising from a collaboration. The owner has residual control rights which are important when unforeseeable contingencies are likely to arise. If these contingencies are largely commercial, then the best party to control residual rights is the commercial entity.
And if all this is correct, then university patent applications should not be a metric at all. A patent KPI could reduce research translation and make universities’ IP departments even more difficult for industry to deal with.
The findings on university support may be sobering
If these KPIs aim to find out how the university as an organisation has encouraged their staff to engage and deliver impact, the results might be sobering. Apart from providing a desk, computer and the imprimatur of an academic title, it would be interesting to see what additional support most universities offer.
It’s my guess that most non-academic staff are spending their time responding to bureaucratic requests from the government, other compliance agencies and international ratings bodies (or to finance, HR and legal matters). Few are probably assigned to aid engagement and impact.