Speech: John O'Hagan Lecture 2018
“We can make something extraordinary even better. And we can do it as realists, and pragmatists – not as utopians. That is the research community at its best.”
Dr Finkel presented the John O’Hagan Lecture for the Queensland Academy of Arts and Sciences on Tuesday 20 November 2018. The lecture series honours Dr John O’Hagan, who was a driving force behind the Royal Society of Queensland and the formation of the Queensland Museum Act, and the first president of the Queensland Academy of Arts and Sciences.
The full speech is included below, or you can download it as a PDF.
In the course of preparing this lecture I had the opportunity to do a bit of digging on Dr John O’Hagan.
I came across the biography he wrote about the early colonial governor, Sir Thomas Brisbane. And yes, there is an Australian city named after him.
Now Sir Thomas’ reign was brief and glorious, as far as science was concerned.
From the start, there were concerns that he would make a terrible governor, because he wasn’t particularly interested in flogging convicts.
He turned up in Australia with four astronomical clocks and three telescopes – and when he was turfed out, just four years later, he left behind some 40,000 astronomical observations and a library of several hundred scientific books.
He was also the founding President and Patron of the very first attempt at a scientific society in Australia.
It was established in 1821, spent several months arguing about whether a commemorative plaque to Captain Cook should be written in English or Latin, and then disbanded in 1822.
So, on balance, not a great success.
But Dr O’Hagan clearly learned from his reading of history, because he knew in his bones that any society needs more than good intentions to get off the ground.
It needs love, and labour, and leadership.
And he gave all of them to the organisations he served, including the Queensland Academy of Arts and Sciences.
I am proud to present this lecture, in his honour, and in tribute to all our scholarly societies.
***
Now when I accepted the Queensland Academy’s invitation, I was given absolute freedom on the topic.
The general direction was: something involving science, and the humanities and arts.
In other words: whatever you like.
So I am left with the classic twenty-first century problem of abundant choice.
My thoughts went first to the Special Report of the Intergovernmental Panel on Climate Change, and the global challenge of containing warming to 1.5 degrees.
On any measure, it’s a staggering task for science – and a staggering task for society as well.
And to be honest, I don’t know which is more difficult: the science or the society.
The science is clear: stop using fossil fuels.
But society is also clear: we need energy.
Good as they are, solar and wind with batteries will take several decades to reach the necessary scale.
And other technologies, like carbon capture and storage, are decades from making an impact, while geothermal electricity is proving elusive.
According to the IPCC, we don’t have much time.
There are certainly solutions that could be scaled up, such as nuclear electricity and catchment hydro… but they tend to make us uncomfortable. And so does the thought of just giving up our cars, or our air-conditioning, or our T-bone steaks.
So we have a conundrum. The solutions to a global problem aren’t acceptable at the local level. And we need to develop breakthrough technologies whilst simultaneously persuading billions of people to change the way they live.
I like a challenge, so that topic was very tempting.
But then I thought about a completely different topic: artificial intelligence, AI.
There was a fascinating snippet in last month’s news from the United States: MIT has launched a new $1 billion college for training AI developers.
And the graduates will be what MIT calls “the bilinguals of the future”.
That means students in all the disciplines – history and economics, just as much as physics and maths – who are also highly capable in AI.
They’ll be humanitarian technologists – or technology-enabled humanists.
And I have to admit, that idea of a new bilingualism was also extremely tempting as my topic for tonight.
But I finally settled on the project that matters to every researcher, something at the core of our identity as scholars, and the essence of the great Enlightenment tradition we share.
And that is the task of safeguarding quality in research.
***
It’s not that academic research hasn’t delivered. To the contrary, the collective global R&D effort has delivered stunning progress for the common good.
But it could do even better if we could excise the bottom end: the poor quality, flawed research that weakens trust and wastes time.
Now I’ve been involved in the academic world for a long time – and there has always been the odd report of misdemeanours or shoddy practice.
But you’d hear those reports at conferences – or if they were really bad, they might be buried somewhere in the middle of the newspaper.
You wouldn’t see them in the headlines.
What I’ve seen in the last few years is a new intensity of concern.
And it’s not just researchers approaching me in my capacity as Chief Scientist – it’s journalists, and people in industry, and yes, every now and then… even the odd politician.
So when I heard that Sir Philip Campbell, the Editor-in-Chief of the publishing group Springer Nature, was visiting Australia, I took my opportunity.
I organised a brainstorming meeting with a sample of research publishers, research institutions, and research funders, together around the table.
And everyone came to the conversation with examples fresh in their minds.
Start with medicine, and look no further than last month’s news.
A paper was published claiming to show that homeopathy works to ease pain.
It was a trial with eight rats. It relied on the researchers’ subjective assessment of whether or not each rat was actually feeling pain – and that assessment wasn’t done double-blind. Even worse, close inspection of the paper showed that some of the images were duplicated and some of the reported concentrations did not correspond with the data in the figures.
The methodology could hardly have been worse, but the paper was accepted by Scientific Reports, an open-access, peer-reviewed journal published by the Nature Publishing Group.
Let’s move from homeopathy to psychology. A team of researchers tried to replicate 21 studies published over a five-year period in Science and in Nature.
Only 13 could be reproduced. And even for those papers, the effect sizes – that is, the difference between the experimental group and the control group – were substantially smaller in the replication studies, indicating that the significance of the findings was overstated in the original papers.
So let’s look then at statistics more widely. About 400 statisticians were surveyed this year on the requests they’ve received from collaborating researchers.
170 had been asked to report results before the data had been validated.
More than 90 had been asked to remove some of the data to achieve a desired outcome.
12 had been asked to falsify the statistical significance; some of them, on more than ten occasions.
So let’s try social studies. A group of provocateurs submitted 20 hoax papers to credible journals. Seven were accepted – including a paper proposing that male university students should be required to sit on the floor in chains; and another paper that was copied, with minor changes, from a chapter of Mein Kampf.
That’s October.
Now we should be very clear that no-one at the brainstorming meeting considered these examples to be cause for despair.
There is comfort in the fact that the problems were detected, and widely reported.
And in almost every case, they were brought to light by researchers, doing exactly what their disciplines had trained them to do best: interrogate the work of their colleagues.
In the context of a massive global effort, spanning millions of individual researchers, in thousands of individual institutions, using methods of increasing sophistication and complexity to tackle challenges on the scale of understanding the human brain…
…it would surely be ridiculous to demand perfection.
But we also agreed that it was equally unhelpful to try to isolate the problems to specific disciplines or institutions.
First, because finger-pointing just drives us into our silos.
And second, because we all pay the price of bad practice, irrespective of where it occurs.
We pay it in credibility.
We pay it in wasted dollars and time.
But most of all, we pay it in lost potential.
I think of an institution whose credibility is surely beyond question: Harvard University.
About a decade ago, it awarded a professorship to a scientist with a rock star reputation.
His research seemed to show the impossible: that you could inject stem cells into damaged hearts to regenerate the muscle.
On the face of it, it was incredible: any cardiologist would tell you that regenerating the human heart isn’t so simple.
But here was a scientist, publishing in leading journals, and now welcomed as a lab director at Harvard, claiming he could do it.
For a decade, other scientists tried to follow the trail. The grant dollars poured in. Patients were recruited into clinical trials. It looked like one of the most promising leads in our time.
But the trail went cold. And last month Harvard recommended the retraction of 31 of that lab director’s papers, for falsified and fabricated data.
And the Harvard-affiliated hospital where his lab was based agreed to a $10 million settlement with the US government for obtaining grants on the basis of scientific fraud.
Yes, these examples may be rare, and yes, in time, the system corrects – but just think for a moment about what a bright young postdoc in your field could do with a research dollar, well-invested.
Every lapse in the standard is an opportunity lost.
And that cost is our best motivation to do better.
On that point, everyone at our meeting was absolutely resolved.
***
So where do we intervene?
You could talk about research quality for days and days. There are researchers who’ve made it their life’s work.
When the National Academies in the United States last reported on this topic, the bibliography alone ran to twenty-three pages.
A one-day brainstorming meeting needed to start with a bit of humility.
So I approached it as an opportunity to learn about what journals and institutions are already doing, to think about what works, and to brainstorm on what might be possible.
I was pleased to discover that many publishers have already taken their own initiatives to improve the robustness of content and processes, and that many institutions have implemented research integrity and quality training for students and supervisors. But so far, it is not enough.
We recognised that quality concerns cover a broad spectrum, from sloppy practice at one end to outright fraud at the other.
It’s the fraud that tends to get into the headlines – and of course, every time it occurs, it is extremely concerning.
But far more common are the lapses from good practice at the other end of the spectrum.
And that part of the problem is probably much harder to solve.
Because we’re not trying to guard against megalomaniacs hell-bent on doing the wrong thing.
We’re talking about the day-to-day practices of well-intentioned people, trying like everyone else to get ahead.
Ask an economist, or a psychologist, or a historian: the learned behaviours of humans are very difficult to shift.
It would be easy if we could just assemble the postdocs once a year and tell them that quality and integrity are important.
But it simply won’t register if those same postdocs see their colleagues running on the treadmill to pump out paper after paper, year after year.
If peer review is just another thankless chore.
If the consequence for acknowledging a mistake, or asking for assistance, is the academic equivalent of crucifixion.
Or if the infrastructure for managing data just isn’t sufficient.
In other words: if the message about research quality isn’t confirmed by the way we calibrate the system.
So that’s where we chose to focus: on the steps that all of us in our different positions could take, to make striving for best practice the norm.
***
And the first key message I took from the day was that leadership is key.
We were lucky to have several international participants in the room, who’ve been part of these conversations at the global level for many decades.
All of them were impressed by the level of knowledge and the dedication of the Australians in the room.
And they were keen to see Australia do more on the global stage.
So we do have something important to contribute. And we can move this conversation forward.
In particular, let me pull out three of the pathways we discussed.
***
The first is the big one: the criteria for grants and professional advancement.
How can we shift the focus from something that’s easy to count – the number of publications – to something that’s actually important to encourage?
Yes, that includes publishing high-quality papers.
But it also includes all the ways that good researchers contribute to the health of the ecosystem, and everyone involved.
For example: the personal investment they make in mentoring and peer review.
The good news is that a slow but steady shift is underway.
Some institutions and funding bodies are moving to a “Top 5” system of recognition.
Instead of listing all their publications, researchers are asked to choose just five, and explain why they are proud of their personal contribution to that research.
And maybe some of those papers aren’t widely cited – but when we sharpen our focus, we can move beyond crude metrics to a more nuanced appreciation of a researcher’s real impact.
We can also find more qualitative ways of thinking about a researcher’s contributions to the people around them.
Imagine if, instead of asking researchers to give us the raw number of PhDs and postdocs they’d supervised, we asked instead to hear about two of their students who have achieved success.
That simple practice could encourage researchers to invest more time in their junior colleagues – and to give those colleagues due credit for what they achieve.
So I am heartened to see leaders across the sector experimenting with different incentives.
And I am confident that more and more researchers will seek out the institutions that create a nurturing environment for truly impressive work.
***
The second dimension I want to highlight is training.
When I was a postdoc in the 1970s, my supervisor belonged to the school of “push them in the deep end and if they’re any good, they’ll swim till they reach the ladder at the shallow end”.
But he was also intensely and personally invested in the work of every single student he supervised.
He insisted on quality over quantity, and he demanded absolute rigour.
I couldn’t have received better training in the right way to do research.
But times have changed – the number of PhDs and postdocs has greatly expanded – and so too have the expectations we place on researchers.
A priority in my mind is to embed research integrity into training, so the expectations are emphasised from Day One.
Most institutions have some form of training, which could be anything from an online module to coursework for credit. It may or may not be compulsory in order to graduate, and the standards are not centrally accredited.
Some institutions also require senior researchers to be explicitly trained in how to foster good practice and culture, but again, it’s not clear how widely that requirement is implemented or how effectively the training is delivered.
We ought to consider whether research integrity training should be a requirement for receiving a grant – but we should also ensure that it’s not presented as a tick-a-box proposition.
And that might take the form of accrediting research integrity training modules, or at least highlighting good programs where they exist.
That training needs to be relevant to researchers trying to bring new techniques into their fields, whilst still maintaining rigorous quality standards.
A point raised by many in our meeting was the particular challenge of raising the baseline level of comfort with data and statistics.
I’m going to give a speech tomorrow about my concerns about the approach we take in this country to mathematics.
In particular, all the perverse incentives that lead students to abandon high-level maths in Year 11 and 12.
But I won’t continue down that path tonight.
And I’m certainly not insisting that every researcher needs to be an expert.
What I do believe, however, is that every researcher ought to be able to usefully engage: so they can harness their own data appropriately, and interrogate the claims that others might make.
And the system shouldn’t just enable it – it should require it.
***
And that leads neatly into the third dimension we discussed: knowledge sharing.
Sharing knowledge is integral to research.
It was the need to share knowledge that led to the creation of the Royal Society, and the European Learned Academies, and every other scholarly community that followed.
It was taken so seriously by Sir Thomas Brisbane and his scientific society in colonial Australia that members were fined ten pounds sterling if they failed to present a paper when called upon to do so.
Now that was in the age when observations were recorded on paper.
Good data management meant writing neatly in the columns.
If you wanted a copy, you fetched a pen.
The expectations are very different today.
I think of the Human Genome Project, a massive global undertaking only made possible by the open-access data platform, GenBank.
It is a global repository of all publicly available nucleotide sequences and their protein translations, now spanning some 100,000 organisms. Currently it contains 280 billion nucleotide bases in 210 million sequences. And it doubles in size every 18 months, with contributions from laboratories all over the world.
But big data isn’t just a matter for scientists – increasingly, it’s finding applications across the humanities as well.
Look at what can be learned about human beings through analysing Google search patterns – or Instagram posts.
New research technologies present us with new expectations – for researchers, to apply them appropriately; for their colleagues, to understand and critique their work; and for institutions, to provide the necessary support.
And that takes investment in infrastructure, and skills.
Now I understand that resources are tight – particularly for PhDs and postdocs.
And they might not be surprised to discover that their institution will give them little more than a library password and a USB stick.
But that can have consequences – as I was reminded at our brainstorming meeting.
We heard the story of one PhD researcher who was accused of research misconduct because the USB stick he was using was lost. But it became clear that the institution hadn’t provided anywhere to store the data safely.
How many more PhDs set out in the belief that good data management is nice if you can afford it, and negotiable if you can’t?
The explicit message about research quality has to be backed by the implicit message from the system.
And for me, making that case for investment from the sector and the government in the infrastructure for knowledge sharing will remain a priority.
***
So across all three of those dimensions – recognition, training and knowledge sharing – I came away from our discussion with a sense of opportunity.
We can make something extraordinary even better.
And we can do it as realists, and pragmatists – not as utopians.
That is the research community at its best.
And even Sir Thomas Brisbane, and all the quarrelling members of his short-lived scientific society, might be persuaded to agree.
May we never cease to argue, as long as we do it in constructive ways.
And to all our researchers, whether they’re in the library, or the lab, or perhaps by now in the university bar… May the Force be with you.
THANK YOU.