Collective actions to strengthen research integrity

"Like tendrils, metrics wind their way right through the research ecosystem – from hiring and promotion, to funding and to publication. They get a strong grip. I think of it like a climbing wisteria that I once did battle with. Once those tendrils take hold of the pergola, they can squeeze the life out of it."

Dr Finkel delivered the Opening Remarks for Clarivate's inaugural research integrity event on Wednesday 25 November 2020, which considered some of the challenges and opportunities the various sectors face in building integrity in the research system.

Entitled "Collective actions to strengthen research integrity", his remarks are below.

***

Thank you very much, Nandita and Martin, and your colleagues at Clarivate, for inviting me to make these opening remarks.

It is wonderful to see an organisation like Clarivate demonstrating its commitment to promoting research integrity, including through this event and the report you recently launched.

You all know I’ve had plenty to say during my term as Chief Scientist about research integrity and the problem of skewed incentives – performance metrics that are all about quantity, that reward researchers for the number of publications with their name attached and the number of citations to which they can lay claim.

That alone has a cascade of effects that the Clarivate report spells out so thoroughly.

Which is why I’ve been toying with the idea of – retrospective metrics!

If the problem is that researchers play to the metrics, what if we just don’t tell them how their work is to be measured?

What if we say to the research community: “Go forth and do your work!”

“We’re still going to evaluate how well you do, but … we’re going to tell you how we’re going to evaluate your work at the end of the year. After you have published.”

That would solve the problem, don’t you think?

***

Back in the real world.

Realistically, we take as a starting point that metrics, articulated in advance, are here to stay.

But at the same time that we acknowledge metrics as part and parcel of the research system, we must also acknowledge the capacity of metrics to take hold of and drive that system.

And not always with benign outcomes.

Like tendrils, metrics wind their way right through the research ecosystem – from hiring and promotion, to funding and to publication. They get a strong grip. I think of it like a climbing wisteria that I once did battle with. Once those tendrils take hold of the pergola, they can squeeze the life out of it.

Metrics can skew incentives throughout the system, so Clarivate is right to point to a multifaceted solution – one involving everyone from individual researchers and institutions to funding bodies, publishers, and the reviewing and data infrastructure that surrounds publishing.

I would also agree that there is a spectrum of behaviour.

My focus hasn’t been on the nefarious end of dishonest and improper practice. We can catch those researchers and nip those practices in the bud through the kinds of technological solutions and software that Clarivate outlines. 

My concern has been with the great bulk of research, almost all of it by well-intentioned researchers. The system is a problem for them.

It is axiomatic that if numbers are what your metrics are measuring, numbers are what you’ll get.

Unscrupulous researchers will pay their way to get the numbers.

Others, ambitious but not dishonest, will try to meet the bar. This might mean maximising opportunities to have their names on papers. Or even chopping up their results to create two papers where one might suffice. Researchers will do this not simply because it advances an individual career, but because they are doing as their institutions, the funding agencies, the journals and the metrics demand.

But you know this. We all know this.

We have little choice but to play the game we are in.

The game is what has to change.

***

Let me say, I don’t want to unwind the system.

But I do want to tame it.

My approach – I was born an engineer and trained as an engineer – has always been to identify opportunities for tangible change. Things that go beyond expressions of good intent.

So the question is: What kinds of metrics will encourage the behaviour we want?

From where I sit, what matters is that the output of our researchers is of the very highest quality.

It is replicable.

It is thorough.

It has chased down the anomalies and asked questions of itself.

It has set the task at the outset, not in retrospect.

It is carried out with integrity.

***

How do we create the right incentives to get there? To encourage this model of the modern researcher?

Last year, I promoted three actions that I think would drive real change:

First, accredited research integrity training should be a prerequisite for applying for a grant.

Second, metrics should measure fewer papers. We’ve all heard and admired the Rule of 5 or the Rule of 10 – judging researchers on a handful of their best papers rather than their full publication lists. My suggestion is that we stop admiring and instead make it a strict condition that researchers can only put forward a small number of papers when they apply for grants.

Third, the only publications allowed in a grant application should be in journals that adhere to agreed publishing standards. Here there is a role for Clarivate, to share its journal evaluations with funding agencies to help them make an assessment of which journals meet the standards.

None of these three actions is novel.

What is novel is my proposal that meeting all three actions be a strict condition of applying for a grant, from the ARC, from the NH&MRC, from the MRFF and from all the funding agencies in other countries.

An administrative, check-the-box requirement.

It’s a long story that I won’t go into here, but taking a lead from an international human rights organisation that my niece and nephew run, I call this approach “follow the money”. They have followed the money to get international banks to insist on ethical practices when they lend money to development projects in Southeast Asia. Similarly, I’m proposing that we follow the money to make funding agencies accountable for some agreed, fundamental checks and balances.

The three actions I have proposed are not the entire solution, but they re-set the equation.

They re-direct the incentives.

They put some new stakes in the ground, to guide the tendrils of our research ecosystem.

I challenge the funding agencies: I have made the case for making these three actions administrative requirements. If there is a reason it can’t happen, let us know.

***

I played a little fast and loose with the idea of retrospective metrics in my opening remarks.

Clarivate people, I note that your research report carries a sensible warning about the need to consider changes to metrics very carefully, because any set of measurable standards will shape behaviour. We want to bend behaviour in the right direction. But we don’t want to embed an alternative ecosystem that brings new weaknesses.

I support the changes Clarivate continues to make to ensure its analytics are relevant and used responsibly.

I’ve been particularly interested in their work to ensure journals indexed in the Web of Science meet minimum publication standards, set out by bodies like the Committee on Publication Ethics.

The reality is that this is an ongoing effort. New problems will arise almost as quickly as new solutions are implemented. We will have to remain ever vigilant.

And I’m confident we will. This event is only one of many around the world, tackling the shared challenge of strengthening research integrity.

I hope the information shared today will help you to identify tangible actions you can take in your organisation.

And remember, actions speak louder than words.

We have to move beyond good intentions.

With that, I wish you good luck for a productive and enjoyable conference.

May the Force be with you.