Information Technology, Political

AI Experts Call for National Safety Institute

Australians for AI Safety

Testimony before the Senate Committee on Adopting AI focused on the need for Australia to create an AI Safety Institute.

 

Polling from the Lowy Institute shows that more than half of Australians think that the risks of AI outweigh its benefits.

 

Mr Greg Sadler, spokesperson for Australians for AI Safety, said: “The Government will fail to achieve its economic ambitions from AI unless it can satisfy Australians that it's working to make AI safe.”

 

Many countries, including the US, UK, Canada, Japan, Korea and Singapore, have moved to create AI safety institutes to progress technical efforts to ensure that next-generation AI models are safe.

 

On 24 May 2024, participants in the Seoul Declaration on AI Safety – including Australia – committed to “create or expand AI safety institutes”. Minister Husic has not said how Australia will approach the issue.

 

Senator Pocock expressed concern that Minister Husic was creating temporary expert advisory bodies but had not taken steps to create an enduring AI Safety Institute.

 

After hearing evidence about the funding Canada and the UK provide to their AI safety institutes, Senator Pocock said: “That seems very doable to me.”

 

Microsoft suggested that Australia was at risk of falling behind other countries, such as Canada, the UK and the US, which have already created their own safety institutes.

 

A recent report found that the global AI assurance industry – companies that work to ensure AI is safe – could be worth US$276 billion by the end of the decade.

 

Mr Lee Hickin, AI Technology and Policy Lead for Microsoft Asia, said:

 

“What I see developing globally is the establishment of AI Safety Institutes. UK, US, Japan, Korea, and obviously the opportunity exists for Australia to also participate in that safety institute network which has a very clear focus of investing in learning, development and skills.”

 

“There is not just a need, but a value, to Australians and Australian business and Australian industry to have Australia represented on that global stage.”

 

“Australia has some world-leading researchers and capability.”

 

Mr Soroush Pour, CEO of Harmony Intelligence, also gave testimony at the recent hearing. Mr Pour said:

 

“The next generation of AI models could pose grave risks to public safety. Australian businesses and researchers have world-leading skills but receive far too little support from the Australian government. If Australia urgently created an AI Safety Institute, it would help create a powerful new export industry and make Australia relevant on the global stage. If Government fails to do the work necessary to take these risks off the table, the outcomes could be catastrophic.”

 

Research from the University of Queensland found that 80% of Australians think AI risk is a global priority. When asked what the Australian government’s focus should be when it comes to AI, most respondents said “preventing dangerous and catastrophic outcomes”.

 

More than 40 Australian AI experts made a joint submission to the Inquiry. The submission from Australians for AI Safety calls for the creation of an AI Safety Institute. The experts said:

 

“Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.”

 

“Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.”

 

The full letter is available at AustraliansForAISafety.com.au.

Contact: Greg Sadler
Email: [email protected]
Phone: 0401 534 879

 

 

 

Greg Sadler is the spokesperson for Australians for AI Safety and the CEO of Good Ancestors Policy. Good Ancestors is a charity that develops and advocates for policies that aim to solve this century’s most challenging problems. Good Ancestors is one of Australia’s leading nonprofits focused on the safety of artificial intelligence. Learn more at: https://www.goodancestors.org.au/

 

Soroush Pour is CEO of technology startup Harmony Intelligence. Harmony Intelligence works with AI labs and governments to help measure & mitigate societal-scale risks of cutting-edge AI systems, such as election manipulation, cyberattacks, bioterrorism, and loss of control. Their current work focuses on model evaluation and red-teaming of frontier AI models, with team members across Australia & the US. Learn more at: https://www.harmonyintelligence.com/

 

 

Sources:

UQ:

Lowy:

Microsoft Testimony (starts 5:01:30):

Testimony from Mr Sadler and Mr Pour (starts 5:28:45):

Report on the AI Assurance Industry:
