Speech: What kind of society do we want to be?
“Every time we come to a decision point about the technologies we allow into our lives we must ask ourselves: What kind of society do we want to be?”
Dr Finkel gave the opening address, on ethics and artificial intelligence, to the Human Rights and Technology Conference in Sydney on Tuesday 24 July.
The full speech is below.
A few weeks ago I took my mother and her best friend, whom I fondly call Aunty Rosa[1], both in their nineties, to Saturday lunch.
They love to hear about the state of the world and what the Chief Scientist is up to, so I decided to tell them about Artificial Intelligence.
First, I pulled out my iPhone and demonstrated how I can use Siri to place a phone call.
Then I explained that Siri was just a plaything compared to Google’s new tool, called Duplex.
Duplex will place your call, perhaps to a restaurant, or a hair salon, and then speak in a natural voice to the human who answers, to make your booking.
What sort of natural voice? they wondered.
Any voice! I said. You could stick with one of Google’s – or maybe, in the future, you could give Google your voice-print, so the voice could sound just like yours!
I told them about an email I’d received from a personal assistant named “Amy Ingram”. Initials: A.I. Artificial Intelligence.
Just think, I told my Mum, right at this minute, Amy and her brother robot, Andrew Ingram, are emailing and setting up meetings on behalf of tech-savvy people all over the world. Top executives! People in research labs and hospitals and schools and maybe even government departments!
Amy and Andrew have access to all their contact lists and diaries and emails!
Whoa, Alan! Slow down, I said to myself. I told them that there would be consequences if Amy and Andrew were hacked to reveal financial secrets and identities… or if we passed a law requiring Amy and Andrew to spy on their employers and report anything deemed suspicious.
It’s amazing, I said, how much information we’re willing to give up in exchange for a bit of convenience.
Think, for example, about all the photos we upload to Instagram and Facebook. All those photos can be used to train algorithms to recognise human faces.
And in China, this technology has taken off.
Do you know, I told them, that facial scanning in China is used for everything from dispensing toilet paper – so you can’t go back multiple times in a day – to picking out individual people in crowds on the streets and at concerts?
In some cities in China, people are assigned what’s called a Social Credit Score. And you gain or lose points depending on your behaviour, including any bad behaviour caught on camera and then picked up by AI, like littering, or jaywalking.
If your score gets too low, you might not get a job, or a bank loan, or permission to leave the country.
And maybe, I said, we could use AI to go one step better: not just to punish the offenders, but to pre-empt the crimes.
Police and security agencies in some countries are already using AI to pinpoint the people most likely to make trouble, so they can place them under closer surveillance.
And welfare agencies are using algorithms to work out which children ought to be separated from their parents.
As I talked, Aunty Rosa grew tense. Tears welled in her eyes.
I don’t like to make my mother’s friends cry – so I asked her what was the matter.
But of course, I should have known.
Aunty Rosa was a Holocaust survivor.
For four years she lived in hiding in Lithuania, a young Jewish woman persecuted for the crime of being alive.
And as I drew my little pictures of the future, she saw only the brutal truth of the past.
A life lived in fear of being watched. By neighbours. By shopkeepers. By bogus friends.
And to this day her fear is so overwhelming that she would not consent to let me use her real name in sharing something of her story with you today.
She didn’t know at the time, and I’m not sure if she would want to know now, but it was data that made a crime on the scale of the Holocaust possible.
Every conceivable dataset was turned to the service of the Nazis and their cronies. Census records. Medical records. To the eternal shame of scientists, even the data from scientific studies.
With a lot of data, you need a sorting technology.
And the Nazis had one. Not computers, but their predecessor: tabulating machines using punch-cards.
Little pieces of stiff paper, with perforations in the rows and columns, marking individual characteristics like gender, age… and religion.
And that same punch-card technology that so neatly sorted humans into categories was also used to schedule the trains to the death camps.
So Aunty Rosa suffered from data plus technology in the hands of ruthless oppressors.
But she survived the war and she came to Australia. And here she found a society where people trusted in government, and in each other.
She saw the same technologies that had wrought such terrible crimes in eastern Europe used here for the collective good.
Yes, data in a humane society could be used to help people: to plan cities and run hospitals and enrol every child in school.
You could get a driver’s licence without fear. You could carry a Medicare card, and feel grateful. You could live quietly in your own house, free from surveillance, and safe.
People weren’t perfect. But for the most part they lived peacefully together, in a society governed by manners and laws, using technology to make life better.
And in that kind of society, artificial intelligence could surely be put to the service of human rights.
I think of the right to ease of travel.
What might self-driving cars mean for the elderly, or people living with disability?
I think of the right to freedom from slavery and forced labour.
Border security agencies are using AI to find the victims of human trafficking.
They can collect the images of women reported missing, and compare them to the faces of women crossing national borders, or appearing in any of the millions of advertisements posted online.
I think of the right to found a family.
Researchers based here in Sydney are using AI to improve the outcomes of IVF.
In the standard procedure, doctors assess the embryos and choose which ones to implant to maximise the likelihood of a successful pregnancy.
AI can make that choice far more swiftly and reliably.
So we can spare families at least some of the trauma and expense of IVF cycles that fail.
A caring society could not possibly turn its back on all that potential.
I know that my mother and Aunty Rosa would agree.
As I told them about the power of AI, they wanted only to know that a future Australia would still be the place they had grown to cherish. Where you could be happy, and safe, and free.
“How,” Aunty Rosa asked, “will you protect me, my daughter and my granddaughter from living in a world in which we are constantly monitored?”
“How, dear Alan, will you protect our liberty?”
Aunty Rosa’s question to me is, in my words, my challenge to you.
What kind of society do we want to be?
I look around the world, and it seems to me that every country is pursuing AI in its own way.
It’s true: there are some questions that we can only resolve at the level of global agreements – like the use of AI in weapons of war.
But the way that we integrate AI into our societies will be determined by the choices we make at home.
Governments decide how companies are allowed to use data. Governments decide how to invest public funds in AI development. Governments decide how they want to harness AI, for policing and healthcare and education and social security – systems that touch us all.
And that means nations like Australia have choices.
We are capable technology innovators, but we have always imported more technology than we develop. That’s inevitable, given our size.
However, that doesn’t mean we have to accept the future we’re handed by companies in China, or Europe, or the United States.
To the contrary, we can define our own future by being leaders in the field of ethics and human rights.
And that is my aspiration for Australia: to be human custodians.
In my mind, that means showing the world how an open society, a liberal democracy, and a fair-minded people can build artificial intelligence into a better way of living.
Am I asking too much? Perhaps.
But let’s not forget: we’ve been pioneers of progress, with ethics, before.
I’ve been reflecting this week on IVF.
Tomorrow, the world’s first IVF baby, Louise Brown, will celebrate her fortieth birthday.
It’s fascinating now to look back at all the things that were written and said when she arrived.
People thought that it was unnatural. That the babies would be deformed or somehow less than fully human. Or that we would start making humans in batch lots, in factories.
But here in Australia we listened to the patients and the clinicians who saw the real promise of this technology.
No-one could hand us a readymade rule-book. There wasn’t one. So we had to create one. And we did.
We were the first country to collate and report on birth outcomes through IVF.
We built a regulatory model that kept our clinics at the leading edge of the science, whilst keeping their patients safe.
We published the first national ethical guidelines on IVF, anywhere in the world.
We harnessed the Medicare system to help families to meet the costs – and clinics worked closely together, so that success rates improved steadily, right across the country.
And so IVF became a mainstream procedure, getting better over time.
There are lessons here for the approach we take to AI.
The first and most important: don’t expect a single answer or a one-shot, set-and-forget AI law.
That wasn’t the secret to adopting IVF.
No: we had a spectrum of approaches that worked together and evolved in line with the state of the technology, and the levels of comfort in the community.
There were laws and regulations, there were industry codes and practices, and there were social norms.
We will need to develop a similar spectrum of responses to AI – so that we can strike the balance between opportunity and risk.
I’ve been thinking in particular about the low-risk end of the spectrum.
By this I mean products like smartphone apps and digital home assistants that promise to make your life a bit easier.
What if we had a recognised mark for ethical technology vendors – like the Fairtrade stamp for ethical suppliers?
In my mind, it’s called the Turing Certificate.
The standards would be developed by a designated expert body, in close consultation with consumer groups and industry.
Then companies that wanted to display the mark would submit both the specific product and their company processes for an ethics audit by an independent auditor.
So you as a consumer could put your purchasing power behind ethical developers – and developers would know what they need to do to make the ethical products that people want.
This is an idea that Australia could pilot and help to spread.
But I emphasise: this voluntary system would be suitable only for low-risk consumer technologies.
What about technologies that touch more directly on our freedom and safety?
Where else could Australia be influential?
I point you to the public sector.
We have a cohort of leaders right across government squaring up to their responsibilities as AI adopters and human custodians.
Just last week, the secretary of the Department of Home Affairs, Michael Pezzullo, went on the record with his agency’s approach to AI.
And he went further, proposing a line in the sand not just for border security but for every decision made in government that touches on a person’s fundamental rights.
He called it “the Golden Rule”.
No robot or artificial intelligence system should ever take away someone’s right, privilege or entitlement in a way that can’t ultimately be linked back to an accountable human decision-maker.
To me, this Golden Rule is a partial answer to my question. It is the mark of a public sector fit to be an ethical custodian. And I know, from my conversations with leaders in many agencies, that they are looking to the Australian Human Rights Commission to help them interpret that custodianship.
Today we are launching a three-year process to consider these issues: to identify the manners, ethics and protections that will work for all of us, not just the early adopters.
I applaud the initiative of Human Rights Commissioner Ed Santow and his colleagues.
We must all be involved in this national discussion.
And every time we come to a decision point about the technologies we allow into our lives we must ask ourselves:
What kind of society do we want to be?
To start, let’s be a society that never forgets to ask that question.
[1] Not her real name.