
AI Experts Call for National Safety Institute

Australians for AI Safety

Testimony before the Senate Committee on Adopting AI focused on the need for Australia to create an AI Safety Institute.

 

Polling from the Lowy Institute shows that more than half of Australians think that the risks of AI outweigh its benefits.

 

Mr Greg Sadler, spokesperson for Australians for AI Safety, said: “The Government will fail to achieve its economic ambitions from AI unless it can satisfy Australians that it's working to make AI safe.”

 

Many countries, including the US, UK, Canada, Japan, Korea and Singapore, have moved to create AI safety institutes to advance technical work on ensuring that next-generation AI models are safe.

 

On 24 May 2024, participants in the Seoul Declaration on AI Safety – including Australia – committed to “create or expand AI safety institutes”. Minister Husic has not said how Australia will approach the issue.

 

Senator Pocock expressed concern that Minister Husic was creating temporary expert advisory bodies but had not taken steps to create an enduring AI Safety Institute.

 

After hearing evidence about the funding Canada and the UK provide to their AI Safety Institutes, Senator Pocock said, “That seems very doable to me.”

 

Microsoft suggested that Australia was at risk of falling behind other countries, such as Canada, the UK and the US, which have already created their own safety institutes.

 

A recent report found that the global AI assurance industry – companies that work to ensure AI is safe – could be worth US$276 billion by the end of the decade.

 

Mr Lee Hickin, AI Technology and Policy Lead for Microsoft Asia, said:

 

“What I see developing globally is the establishment of AI Safety Institutes. UK, US, Japan, Korea, and obviously the opportunity exists for Australia to also participate in that safety institute network which has a very clear focus of investing in learning, development and skills.”

 

“There is not just a need, but a value, to Australians and Australian business and Australian industry to have Australia represented on that global stage.”

 

“Australia has some world-leading researchers and capability.”

 

Mr Soroush Pour, CEO of Harmony Intelligence, also gave testimony at the recent hearing. Mr Pour said:

 

“The next generation of AI models could pose grave risks to public safety. Australian businesses and researchers have world-leading skills but receive far too little support from the Australian government. If Australia urgently created an AI Safety Institute, it would help create a powerful new export industry and make Australia relevant on the global stage. If Government fails to do the work necessary to take these risks off the table, the outcomes could be catastrophic.”

 

Research from the University of Queensland found that 80% of Australians think AI risk is a global priority. When asked what the Australian government’s focus should be when it comes to AI, most respondents said “preventing dangerous and catastrophic outcomes”.

 

More than 40 Australian AI experts made a joint submission to the Inquiry. The submission from Australians for AI Safety calls for the creation of an AI Safety Institute. The experts said:

 

“Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.”

 

“Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.”

 

The full letter is available at AustraliansForAISafety.com.au.

Contact: Greg Sadler
Email: greg@australiansforaisafety.com.au
Phone: 0401 534 879


Greg Sadler is the spokesperson for Australians for AI Safety and the CEO of Good Ancestors Policy. Good Ancestors is a charity that develops and advocates for policies that aim to solve this century’s most challenging problems. Good Ancestors is one of Australia’s leading nonprofits focused on the safety of artificial intelligence. Learn more at: https://www.goodancestors.org.au/

 

Soroush Pour is CEO of technology startup Harmony Intelligence. Harmony Intelligence works with AI labs and governments to help measure & mitigate societal-scale risks of cutting-edge AI systems, such as election manipulation, cyberattacks, bioterrorism, and loss of control. Their current work focuses on model evaluation and red-teaming of frontier AI models, with team members across Australia & the US. Learn more at: https://www.harmonyintelligence.com/


Sources:

UQ:

Lowy:

Microsoft Testimony (starts 5:01:30):

Testimony from Mr Sadler and Mr Pour (starts 5:28:45):

Report on the AI Assurance Industry:
