AI: Can we get it right, please?

When was the last time you read the terms and conditions before clicking “accept”? The amount of personal information we give up in the digital sphere is astonishing, and it will only grow as our reliance on artificial intelligence deepens in self-driving cars, automated medicine, and security and law enforcement.

In a speech at an event organised by the Institute of Public Administration Australia and the Australian Council of Learned Academies on August 18, Australia’s Chief Scientist, Dr Alan Finkel, emphasised the need for strong regulation to protect privacy, safety and freedom.

He asked: what will it take for you to trust AI? To allow it to drive your car? To monitor your child? To analyse your brain scan and direct the surgical instruments that extract your tumour? To spy on a crowd and zero in on the perpetrator of a robbery?

Last year, Dr Finkel launched the Australian Council of Learned Academies’ Horizon Scanning report on AI for the National Science and Technology Council. It pointed to the crucial importance of preserving society’s values as we set standards for AI, to ensure public confidence.

As we consider how far we are willing to extend our trust to AI, Dr Finkel called for consequences for AI developers who fall short of standards, and to this end he has several times canvassed the idea of an AI “trust mark”. This certification, which he has dubbed the Turing Certificate, would identify ethical AI and work a little like the “Fairtrade” label.

He also highlighted the opportunity Australia has to become a global leader in developing ethical frameworks for AI, to ensure it does not erode our freedom or our safety. Legislation, guidelines and ethical behaviour are all part of the equation.

The other part, Dr Finkel anticipates, is powerful future technology that gives us more control by bringing much more of the processing power into individual phones and other devices in the home. Keeping computation local limits the information fed to the cloud and protects privacy: AI on your phone, not in the cloud.

Last year, the Australian Government released a draft set of principles aimed at reducing the risk of negative impacts from AI products and encouraging ethical best practice during their development and use. The Human Rights Commission is also running a three-year project on human rights and digital technology.

Dr Finkel has spoken extensively on AI during his term, including addresses to the Human Rights and Technology Conference in July 2018 (What Kind Of Society Do We Want To Be?), National Manufacturing Week in May 2019 (What Manufacturing Can Teach AI), a Group of Eight summit on collaboration and commercialisation in October 2019 (Harnessing the Power of Artificial Intelligence to Benefit All), and the launch of the University of Melbourne’s Centre for Artificial Intelligence and Digital Ethics in April 2020 (AI on my Device, not in the Cloud).