Building public trust in a landscape of artificial intelligence and digital technologies
Australia’s Chief Scientist, Dr Cathy Foley, has addressed the crucial need to build public trust as artificial intelligence plays an increasing role in day-to-day transactions.
Dr Foley delivered the Ninian Stephen Law Program Oration at the University of Melbourne on 21 October, launching an initiative of the Centre for Artificial Intelligence and Digital Ethics that focuses on the challenges of emerging technologies.
She outlined the enormous opportunities that digital technologies represent for Australia, including the role of artificial intelligence and machine learning in sectors such as medicine, as demonstrated during the pandemic, and the vast potential of new quantum technologies.
“In my role as Chief Scientist, I’m taking every opportunity to urge Australian policymakers, educators, and industry leaders to embrace the new digital revolution and stay with the leading pack,” she told the audience.
But she warned that Australia must be clear about the points of vulnerability as artificial intelligence and machine learning reach into ever more aspects of our lives.
“It’s not about turning our back on digital technologies. It’s about embracing them and engaging with them in a really active and sophisticated way,” she said, highlighting the need for robust systems to deal with bias and error.
“When AI is used to model and predict our behaviour, and then is used to make decisions about the way we are treated – whether that be employment decisions, or banking decisions, or other areas of our lives – we need to tread carefully indeed.”
Mistakes have already been made with social media, which was widely adopted before checks and balances could be developed.
“Social media and the current generation of mobile applications have given us a taste of the dangers. But the new digital technologies – AI, machine learning and quantum – will amplify the risks.”
The data used in AI algorithms can be incomplete, drawn from limited and historical information. This creates a risk of bias and discrimination, such as when AI systems model the creditworthiness of customers from historical datasets. Women, Indigenous people and young people are most likely to bear the brunt of these built-in biases.
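To make that mechanism concrete, here is a minimal, hypothetical Python sketch (not drawn from the oration): a toy credit-approval model trained on historical decisions that penalised one group reproduces that penalty for new, otherwise identical applicants. All names, data and numbers are illustrative assumptions.

```python
# Illustrative sketch only: a model trained on biased historical lending
# decisions reproduces that bias. The data-generating process and all
# variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicants: `group` stands in for a protected attribute;
# `income` is the only legitimate signal of creditworthiness here.
group = rng.integers(0, 2, n)      # 0 = majority, 1 = minority
income = rng.normal(50, 10, n)

# Historical approvals were biased: equally creditworthy applicants in
# group 1 were approved less often, so the labels encode past discrimination.
approved = (income + rng.normal(0, 5, n) - 8 * group) > 50

# A model trained on those labels learns the historical penalty,
# not true creditworthiness.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two identical applicants who differ only in group membership
# receive very different predicted approval probabilities.
print(model.predict_proba([[50, 0], [50, 1]])[:, 1])
# e.g. roughly [0.5, 0.1] -- the disparity is learned straight from the data.
```

The point of the sketch is that no one coded the discrimination explicitly; it arrives through the historical labels, which is why transparency about training data matters.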
Dr Foley said the solutions are by no means simple, but will be most effective if they are framed within a clear set of principles.
She highlighted three principles:
- Diversity in the digital workforce, as the first step to creating a digital realm that reflects the full diversity of humanity.
- Transparency in the algorithms and data that underpin AI, and in the situations in which it is deployed.
- Accountability in the use of AI.
Dr Foley said the conversation is underway at multiple levels, with momentum here and overseas. The Australian Government has released a Digital Economy Strategy and a National AI Centre is being established to coordinate expertise and address barriers.
The Government’s AI Action Plan, supported by the AI Ethics Framework, will help guide businesses and governments to responsibly design, develop and implement artificial intelligence. The Australian Human Rights Commission’s recent report offers a number of recommendations, including establishing an AI Safety Commissioner to provide technical expertise and build public trust in the safe use of AI.
She congratulated the University of Melbourne’s Centre for Artificial Intelligence and Digital Ethics for its cross-disciplinary approach to emerging technologies, which will bring together technical, legal and social expertise.
“I hope this program will inspire and convene more important, inclusive conversations to help our law and policymakers and our community more widely understand AI to a sophisticated level …
“Engagement, consultation and ongoing communication with the public about AI will be essential for building community awareness. Public trust is critical to enable acceptance and uptake of the technology.”