[Image: Australians for AI Safety Scorecard graphic asking 'Who will make AI safe?', comparing party logos and positions on AI safety policies: the Greens with a green 'Yes' tick, Labor and the Liberal/Nationals with orange question marks, and an 'Others' category.]

Australians for AI Safety Scorecard: See who supports expert-recommended AI policies.

Survey shows Australians' trust in AI at an all-time low; election scorecard compares parties on backing an AI Safety Institute and AI Act.

CANBERRA, AUSTRALIA, April 30, 2025 /EINPresswire.com/ -- Australians' trust in AI has hit a record low, with widespread fear about its misuse fuelling urgent calls for government action as the federal election approaches, according to new research and a political scorecard released today.

A new global survey by the University of Melbourne and KPMG confirms Australians’ confidence in artificial intelligence has sunk to its lowest level on record:

- Only a third of Australians trust AI systems.

- Half of Australians have personally experienced or seen harm from AI.

- Nearly three in five fear that elections are being manipulated by AI-generated content or bots.

- More than three-quarters want stronger regulation, and less than a third think existing safeguards are adequate.

- Nine in ten support specific laws to curb AI-driven misinformation.

Australians for AI Safety has released its 2025 Federal Election AI Safety Scorecard, comparing every major party’s stance on two expert-endorsed AI policies:

- An Australian AI Safety Institute – a well-funded, independent body that can test frontier models, research risks and advise government.

- An Australian AI Act – legislation that places mandatory guardrails and clear liability on developers and deployers of high-risk and general-purpose AI.

The scorecard shows that only the Australian Greens, Animal Justice Party, Indigenous-Aboriginal Party of Australia and Trumpet of Patriots fully back both expert-endorsed policies. Senator David Pocock and other independents have also endorsed them. The Libertarian Party generally opposed the policies, referring to them as “government schemes”.

“The scorecard shows who is prepared to match rapid AI progress with equally rapid safeguards,” said Greg Sadler, Australians for AI Safety spokesperson. “Australians tell us they want leaders with a real vision for the future. If we expect that kind of vision, we need to vote with future-focused issues like AI in mind.”

The Coalition's response to the scorecard highlighted perceived government inaction: "We need to be alive to the risks associated with this technology... [T]he Albanese Labor Government has completely failed to take decisive action or provide clarity and policy direction. Holding roundtables, commissioning reports and announcing advisory bodies is not the dynamic action that is required on such a critical issue."

However, the Coalition’s response did not outline a clear position on the expert-recommended policies for an AI Safety Institute or an AI Act.

“This is exactly what policrastination looks like – one party accusing the other of inaction while not proposing action of its own,” said Taylor Hawkins from Foundations for Tomorrow. “Australians are tired of politicians delaying and dodging hard decisions and putting important policies in the too-hard basket.”

"As an AI governance researcher, I know it's critical that Government gets these AI policies right to safeguard the extraordinary benefits of highly-capable, general purpose AI," said Alexander Saeri, AI governance researcher at The University of Queensland and director of the MIT AI Risk Index. "This new research is consistent with what we’ve known since at least 2020 - Australians want stronger safeguards now."

"Seeing advanced AI models demonstrate deceptive capabilities shortly after my child was born was a wake-up call. It’s clear government isn’t taking this seriously," said consultant-turned AI researcher Michael Kerrison. "I left my job to focus on AI governance because this is serious, and it's happening now. As a new parent, I find the lack of safeguards unacceptable. The major parties need to step up; government must take swift action to give Australian families a fighting chance."

Australians for AI Safety argues that robust safety regulation allowed the aviation sector to flourish, and that AI innovation will likewise only thrive once independent testing and clear statutory duties build public trust. Comparable institutes are already operating in Japan, Korea, the United Kingdom and Canada; alongside the EU AI Act, they set a clear benchmark for Australia. Australia committed to creating an AI Safety Institute but has yet to do so.

The full party and candidate results are available at www.australiansforaisafety.com.au/scorecard. The scorecard lets voters examine each major party’s full responses and expert evaluations of how they stack up.

About Australians for AI Safety

Australians for AI Safety is a civil-society initiative coordinated by the charity Good Ancestors. It promotes evidence-based policy that allows Australians to realise AI’s benefits while guarding against catastrophic risks.

Mr Gregory Sadler
Good Ancestors Policy
+61 401 534 879

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.


