
Claiming AI can lead to human extinction is an overreaction: RMIT AI experts

RMIT University 4 mins read

Professor Matt Duckham, Professor in Geospatial Sciences. 

Topics: AI, overreaction, disruption, discrimination.  

“The statement could be viewed as an astonishing overreaction, and something I expect its signatories will look back on sheepishly in the coming months and years.  

“To compare this technology with the truly existential threats we face today, such as climate change, war and pandemics, is simply absurd.  

“No matter how surprising or remarkable the new AI capabilities are, this technology is just a statistical model of word frequencies. 

“The technology nevertheless marks a remarkable and exciting milestone in AI. It is surely causing big changes and disruptions in many industries and sectors of society, which are only set to grow over the next few years.  

“Many of those disruptions will be negative, but hopefully many more will be positive. None will be apocalyptic though.  

“The real harm of this technology lies in its subtle amplification of discrimination, inequity, exploitation, and entrenched advantage – harms that have been evident and growing in the use of AI across industry, institutions, and society for some time.” 

Professor Matt Duckham is a Professor in Geospatial Sciences, with expertise in spatial computing, geographic information science and geo-visualisation.  

Professor Lisa Given, Enabling Impact Platform Director, Social Change, Research and Innovation Capability 

Topics: AI panic, AI tools, risks and harms. 

“The risk of extinction from AI is highly speculative and does not compare to the real and immediate global risks humanity faces, such as climate change and the COVID-19 pandemic – real and tangible concerns that governments need to address globally. 

“When institutions issue warning statements like this, they risk creating unnecessary panic about future, potential technologies that may never materialise.  

“They also divert people’s attention away from the real risks posed by AI tools today (such as misinformation, bias, lack of transparency, potential for abuse), which are causing real harm. 

“The public has been exploring the usefulness of AI tools daily, yet is often unaware of the real limitations of these systems and the risks of adopting these emerging technologies. 

“Tools that use copyrighted materials without consent, that present false information using a convincing and empathetic tone of voice, that pre-screen job applicants against biased datasets, and that enable image-based abuse – are just some of the real harms we are seeing today. This is where regulation, transparency, and scrutiny are needed urgently. 

“AI tools have many benefits to offer humanity, but we need to be critical and careful about how these tools are used for the betterment of society.  

“This requires us to question who has control of these tools, what people and companies may gain (or lose) from how they are used, and what steps we need to take, as a society, to ensure people and companies use these tools appropriately.” 

Professor Lisa Given is Professor of Information Sciences and Director of RMIT’s Social Change Enabling Impact Platform. Her research examines people’s use of technology tools for decision-making in business contexts and everyday life. 

Professor Matthew Warren, Director, Centre for Cyber Security Research and Innovation  

Topics: cyber security, autonomous weapons, government regulation. 

“Doomsday scenarios on AI are nothing new. Remember in 2014, when Stephen Hawking warned AI could end mankind? Or that same year, when Elon Musk warned that with artificial intelligence we are ‘summoning the demon’?  

“Now the Center for AI Safety claims AI systems could lead to human extinction. It is all hype that distracts from the real challenges of global warming, pollution, famine, and more.  

“Will these generative AI models end humanity? No.  

“AI systems should be embraced, as they will help improve society in many ways, including ways we do not even understand now. 

“Where they will have a negative impact on society is in widespread job losses, disinformation, and the creation of deepfakes. 

“The biggest AI risk the world faces is how authoritarian countries such as China and Russia will develop and apply AI systems, potentially including military applications with autonomous weapons. This is where pressure and controls should be applied. 

“Western countries, such as Australia, must develop AI frameworks that will guide the development of AI and identify areas where AI systems will not be developed.   

“The Australian government has now outlined its intention to regulate artificial intelligence, saying there are gaps in existing law and new forms of AI technology will need "safeguards" to protect society based upon a risk approach.  

“This move should be the focus of our attention and support. We shouldn’t focus on speculative doomsday scenarios.”  

Professor Matthew Warren is Director of the Centre for Cyber Security Research and Innovation and a researcher in cyber security and computer ethics. 

Fan Yang, Research Associate, School of Media and Communication 

Topics: AI design, technology prejudice, programming.   

“To what extent AI can be risky and pose a threat of human extinction depends on how AI is designed, programmed, supervised, and used. 

“Technologies can be very different if humanity, care and environmental sustainability are centred, as opposed to productivity or efficiency, as with ChatGPT.  

“The problem lies in the social prejudices, inequalities and injustices that have been embedded and inscribed in the long history of science, technology, and society – which many of us are not aware of until we experience our keyboard auto-correcting our name to an English word, Alexa failing to recognise our accents, or an Instagram filter reassigning us another race or ethnicity.  

“The headline that AI can cause human extinction is eye-catching, but the risks of technologies are more likely to fall disproportionately on groups of people who are already socially disadvantaged – women, minorities, people of colour, and others.  

“The global pandemic and nuclear wars tell us the same story.  

"AI is part and parcel of capitalism where disposable labour has been historically used for the financial gain of the capitalist and intensifies exploitation and alienation among the groups of people who are already disadvantaged.” 

Fan Yang is a Research Associate at RMIT. She specialises in Australia–China relations as mediated through technologies, including WeChat, AI and Chinese technologies.  


Contact details:

Interviews: 

Matt Duckham, 0477 031 744 or matt.duckham@rmit.edu.au  

 

Lisa Given, 0458 340 908 or lisa.given2@rmit.edu.au 

 

Matthew Warren, 0432 745 171 or [email protected] 

 

Fan Yang, 0452 099 210 or [email protected] 

 

General media enquiries: RMIT Communications, 0439 704 077 or [email protected]
