
Experts urge new Ministers Tim Ayres and Andrew Charlton to act on AI safeguards, as trust in AI hits record lows, risking Australia's productivity agenda.

CANBERRA, AUSTRALIA, May 14, 2025 /EINPresswire.com/ -- Today, on his first day as Australia's new Industry Minister, Senator Ayres received an open letter from hundreds of Australian AI experts, public figures, and concerned citizens. The letter, organised by Australians for AI Safety, says that action on safety is necessary for Australia to seize AI’s opportunities. The letter is also being sent to Dr Charlton, the new Assistant Minister for Science, Technology and the Digital Economy.

The experts are calling on Senator Ayres and Dr Charlton to pick up the pace on two specific issues where the outgoing Minister, Ed Husic, was slow to act. The first is an AI Safety Institute: at the Seoul AI Summit in May 2024, then-Minister Husic committed Australia to creating one. Every other signatory to the Seoul commitment has since delivered; Australia has not.

The second is an Australian AI Act. Former Minister Husic ran a series of consultations on safe and responsible AI, culminating in a paper proposing mandatory guardrails for high-risk AI systems. The experts argue that the consultations are done and now is the time for action.

Action on AI safety is necessary to achieve the Albanese Government's agenda on AI and productivity. Treasurer Jim Chalmers is working to “embrace the AI opportunity”, but a new KPMG study shows people won't use AI if they don't trust it. The study found that only one-third of Australians trust artificial intelligence, that 78% want stronger government regulation, and that half have personally experienced or witnessed harm from AI systems.

Action on AI safety – like a safety institute and an AI Act – offers Ayres and Charlton a pathway to solve the trust problem and clear the way for faster adoption.

Professor Terry Flew of the University of Sydney, a signatory to the letter, said, “If people do not trust AI, they will be reluctant to use it.”

Good Ancestors’ AI 2025-2028 White Paper expands on the expert letter. It highlights expert analysis showing that the world is in a critical window before transformative AI reshapes society, and warns that inaction could jeopardise AI's economic potential – billions in lost productivity – while leaving critical infrastructure exposed to AI threats.

In addition to a safety institute and an AI Act, the white paper makes other practical recommendations to build trust, like hosting the next AI summit in Australia and revising the Voluntary AI Safety Standard.

"Australians won't adopt AI without trust—and right now, there are genuine reasons for concern. Failing to address legitimate safety concerns will expose Australia to immediate risks and limit our economic competitiveness in an AI-driven world," said Greg Sadler, CEO of Good Ancestors. "The white paper highlights work by the Tech Council of Australia indicating that fostering trust could unlock billions in additional economic value."

Dr Toby Ord, Senior Researcher at Oxford University and a signatory to the Australians for AI Safety open letter, warns of the global implications: "Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future.”

The white paper also flags longer-term issues, including potential mass unemployment, sophisticated interference in our democracy, and the misuse of AI to develop bioweapons.

The urgency is palpable for many. Michael Kerrison, a consultant turned AI governance researcher, said, "Seeing advanced AI models demonstrate deceptive capabilities shortly after my child was born was a wake-up call. It’s clear government isn’t taking this seriously. I left my job to focus on AI governance because this is serious, and it's happening now. As a new parent, I find the lack of safeguards unacceptable. Government must take swift action to give Australian families a fighting chance."

"Leading AI developers are already issuing warnings that their systems are nearing the capability to assist in creating novel threats," noted Mr Sadler. "The policy decisions made by the Albanese Government in the next 12 months will be pivotal. They will determine whether Australia shapes its AI future or is controlled by it, with profound consequences for our security, economy, and society."

Tens of thousands of Australians used the Australians for AI Safety scorecard to look into the policies of candidates in their seats, underscoring that AI safety is becoming a national issue.

The Australia AI 2025-2028 White Paper is available at goodancestors.org.au/whitepaper, and the Australians for AI Safety open letter is available at australiansforaisafety.org.au/letters.

About Good Ancestors: Good Ancestors is an Australian charity dedicated to improving long-term outcomes through evidence-based, practical policy recommendations.

Mr Gregory Sadler
Good Ancestors Policy
+61 401 534 879



