
Australia’s AI Safety is at Risk

Social Cyber Group

Australia’s AI Week, held from 24 to 30 November, came with a significant ministerial announcement on 25 November: a federal government plan to create a national AI Safety Institute. The country faces a very hard slog to help this institute deliver AI security.

 

The Social Cyber Group (SCG) welcomes this overdue step by the Australian government but warns that the emerging global picture of AI use is one of increasing threats and escalating risks. At the very moment governments are racing to build AI safety regimes, the firms at the centre of the ecosystem are not consistently turning safety principles into practice.

 

A co-founder of SCG, Professor Greg Austin, pointed to new research from the United States to underline the scale of the challenge. Work by US AI Safety Institute researcher Kevin Klyman has examined how well 15 major tech companies implemented the voluntary AI safety commitments they made to the White House in 2023. The findings are sobering. Most companies did not fully honour their commitments, with particularly weak follow‑through on testing for extreme risks, strengthening security, and enabling independent scrutiny. 

 

“This gap between rhetoric and implementation is now central to Australia’s own ‘technology crisis’, which the new AI Safety Institute must address,” Austin said.

 

Australia was a member of a small group of close allies supporting the launch of the International Network of AI Safety Institutes in San Francisco on 21 November 2024. The other countries and entities involved were Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.

 

On 20 November, the US chipmaker NVIDIA directed all staff globally to use AI tools as part of their work. The CEO, Jensen Huang, called for all employees to treat AI as the starting point for every task, suggesting it would be “insane” not to do so. His comments provoked some public disquiet and employee concern.

 

Yet in Australia, public policy has only just begun to catch up with such radically expanding use. The new AI Safety Institute will be tasked with evaluating emerging capabilities and advising on risk. It will have to swim against strong tides of commercial momentum, technical complexity and limited domestic oversight capacity.

 

A particular weak point is Australia’s under‑investment in AI education. Since 2019, the country has issued high‑level AI ethics principles and voluntary safety standards, but it has not matched these with a serious national program to build AI literacy and technical capability at scale. Universities, vocational providers and professional‑education organisations are only now starting to design short, intensive programs that could help public servants, regulators and industry leaders quickly understand and manage AI risks. Without a major uplift in education and training, the formal architecture of an AI safety regime will sit on modest foundations.

 

“The business commitment to AI safety in Australia is largely rhetorical,” Austin said. Most Australian businesses are experimenting with AI, but few have put in place robust internal governance, testing and audit processes that match the pace at which tools are being rolled out. The new AI Safety Institute has been framed as an “expert hub” and an adviser to regulators, rather than an enforcement body armed with strong statutory powers.

 

One tool that could help close this gap is the Technology Impact Assessment (TIA): a structured assessment that examines how digital systems, including AI, affect people, institutions and critical infrastructure. The Australian Government has begun to promote AI impact assessments in its own operations, requiring agencies to use an AI Impact Assessment tool and related assurance frameworks when deploying sensitive new systems. New South Wales has gone further with a mandatory AI assessment framework for state projects. However, these efforts remain fragmented and limited largely to government. There is, as yet, no economy‑wide requirement for high‑risk private‑sector AI systems to undergo rigorous, independent assessment before or during deployment.

 

Researchers associated with the Social Cyber Group and its related Social Cyber Institute argue that this must change. Professor Glenn Withers AO of the Crawford School at the Australian National University was co‑leader of a government‑funded project on TIA with Indian partners. “Our research with Indian colleagues on TIA, funded by the Australian government, suggests that governments need to commit more heavily to this tool as a necessary response to AI pressures,” Withers said. “In my view, systematic and regular assessment according to rigorous principles is the only way to connect abstract safety principles with the messy realities of AI systems embedded in workplaces, markets and public services.” Withers is a co-founder of SCG.

 

Withers also points to the slowness of the Australian government in following up on overseas lessons. SCG hosted a visit to Australia by a UK and US expert delegation in March this year; that delegation is now promoting a joint AUKUS cyber and AI education initiative, with little effective response from the government so far.

 

For Lisa Materano, the CEO of Blended Learning International and a member of the same research team from the Social Cyber Institute (SCI), the priority is workforce capability. She is calling for “rapid escalation by Australia of investment in AI education, especially through professional education, such as one-day or week-long courses”.

 

“Short, targeted programs for managers, regulators and technical staff could give Australia’s institutions a fighting chance to keep pace with AI advances and to use tools like TIA effectively,” Materano said. She is also a co-founder of SCG, SCI and the related Social Cyber and Tech Academy.

 

The Social Cyber Group very much welcomes the announcement of the AI Safety Institute as a necessary first step but emphasises that such institutions alone will not solve Australia’s AI safety problem. Without enforceable obligations on high‑risk systems, strong incentives for firms to follow through on their commitments, and a substantial uplift in AI education, the new Institute risks becoming symbolic rather than a driver of real change.


About us:

The mission of SCG is to help businesses, governments and community organisations avoid or minimise the potentially high costs of cyber crises. Its work is based on the principle that each organisation has unique social characteristics that shape its security outcomes. SCG helps enterprise leaders understand the social DNA of their information technology and rewire it for sustainable and superior risk management.


Contact details:

Greg Austin +61 450 190 323 (available in Delhi until 30 November, 3pm)

Glenn Withers +61 416 249 350

Lisa Materano +61 438 134 558 
