
Claiming AI can lead to human extinction is an overreaction: RMIT AI experts

RMIT University

Professor Matt Duckham, Professor in Geospatial Sciences. 

Topics: AI, overreaction, disruption, discrimination.  

“The statement could be viewed as an astonishing overreaction, and something I expect its signatories will look back on sheepishly in the coming months and years.

“To compare this technology with the truly existential threats we face today, such as climate change, war and pandemics, is simply absurd.

“No matter how surprising or remarkable the new AI capabilities are, this technology is just a statistical model of word frequencies.

“The technology nevertheless marks a remarkable and exciting milestone in AI. It is surely causing big changes and disruptions in many industries and sectors of society, which are only set to grow over the next few years.  

“Many of those disruptions will be negative, but hopefully many more will be positive. None will be apocalyptic though.  

“The real harm of this technology lies in its subtle amplification of discrimination, inequity, exploitation, and entrenched advantage – harms that have already been evident and growing in the use of AI across industry, institutions, and society for some time.”

Professor Matt Duckham is a Professor in Geospatial Sciences, with expertise in spatial computing, geographic information science and geo-visualisation.  

Professor Lisa Given, Enabling Impact Platform Director, Social Change, Research and Innovation Capability 

Topics: AI panic, AI tools, risks and harms. 

“The risk of extinction from AI is highly speculative and does not compare to the real and immediate global risks humanity faces, such as climate change and the COVID-19 pandemic – real and tangible concerns that governments need to address globally.

“When institutions issue warning statements like this, they risk creating unnecessary panic about future, potential technologies that may never materialise.  

“They also draw people’s attention away from the real risks posed by AI tools today (such as misinformation, bias, lack of transparency and potential for abuse), which are causing real harm.

“The public has been exploring the usefulness of AI tools daily, yet is often unaware of the real limitations of these systems and the risks of adopting these emerging technologies.

“Tools that use copyrighted materials without consent, that present false information using a convincing and empathetic tone of voice, that pre-screen job applicants against biased datasets, and that enable image-based abuse – are just some of the real harms we are seeing today. This is where regulation, transparency, and scrutiny are needed urgently. 

“AI tools have many benefits to offer humanity, but we need to be critical and careful about how these tools are used for the betterment of society.  

“This requires us to question who has control of these tools, what people and companies may gain (or lose) from how they are used, and what steps we need to take, as a society, to ensure people and companies use these tools appropriately.” 

Professor Lisa Given is Professor of Information Sciences and Director of RMIT’s Social Change Enabling Impact Platform. Her research examines people’s use of technology tools for decision-making in business contexts and everyday life. 

Professor Matthew Warren, Director, Centre for Cyber Security Research and Innovation  

Topics: cyber security, autonomous weapons, government regulation. 

“Doomsday scenarios about AI are nothing new. Remember in 2014, when Stephen Hawking warned AI could end mankind? Or, also in 2014, when Elon Musk warned that with artificial intelligence we are summoning the demon?

“Now the Center for AI Safety claims AI systems could lead to human extinction. It is all hype that distracts from the real challenges of global warming, pollution, famine, and more.

“Will these generative AI models end humanity? No.  

“AI systems should be embraced, as they will help improve society in many ways, including ways we do not even understand yet.

“Where they will have a negative impact on society is in widespread job losses, disinformation, and the creation of deepfakes.

“The biggest AI risk the world faces is how authoritarian countries such as China and Russia will develop and apply AI systems, potentially including military applications with autonomous weapons. This is where pressure and controls should be applied.

“Western countries, such as Australia, must develop AI frameworks that will guide the development of AI and identify areas where AI systems will not be developed.   

“The Australian government has now outlined its intention to regulate artificial intelligence, saying there are gaps in existing law and new forms of AI technology will need "safeguards" to protect society based upon a risk approach.  

“This move should be the focus and should be supported. We shouldn’t focus on speculative doomsday scenarios.”

Professor Matthew Warren is Director of the Centre for Cyber Security Research and Innovation and a researcher in cyber security and computer ethics.

Fan Yang, Research Associate, School of Media and Communication 

Topics: AI design, technology prejudice, programming.   

“To what extent AI can be risky and pose a threat of human extinction depends on how AI is designed, programmed, supervised, and used.

“Technologies can be very different if humanity, care and environmental sustainability are centred, as opposed to productivity or efficiency, as with ChatGPT.

“The problem lies in the social prejudices, inequalities and injustices that have been embedded and inscribed in the long history of science, technology, and society – which many of us are not aware of until our keyboard auto-corrects our name to an English word, Alexa cannot recognise our accent, or an Instagram filter reassigns us another race or ethnicity.

“The headline that AI can cause human extinction is eye-catching, but the risks of these technologies are more likely to be disproportionately distributed to groups of people who are already socially disadvantaged – women, minors, people of colour, and others.

“The global pandemic and nuclear wars tell us the same story.  

"AI is part and parcel of capitalism where disposable labour has been historically used for the financial gain of the capitalist and intensifies exploitation and alienation among the groups of people who are already disadvantaged.” 

Fan Yang is a Research Associate at RMIT. She specialises in Australia-China relations through technologies, with a focus on WeChat, AI and Chinese technologies.


Contact details:

Interviews: 

Matt Duckham, 0477 031 744 or matt.duckham@rmit.edu.au  

 

Lisa Given, 0458 340 908 or lisa.given2@rmit.edu.au 

 

Matthew Warren, 0432 745 171 or matthew.warren2@rmit.edu.au 

 

Fan Yang, 0452 099 210 or fan.yang2@rmit.edu.au 

 

General media enquiries: RMIT Communications, 0439 704 077 or news@rmit.edu.au
