
CDU EXPERT: Discussion paper misses mark on taking AI threats seriously

Charles Darwin University

18 JANUARY, 2024

Who: Charles Darwin University Computational and Artificial Intelligence expert Associate Professor Niusha Shafiabady discusses the Australian Government’s Safe and Responsible AI in Australia paper. Associate Professor Shafiabady is an internationally recognised expert and developer of AI data analysis platform Ai-Labz.

Topics:

  • The Australian Government’s Safe and Responsible AI in Australia discussion paper.
  • Artificial Intelligence, machine learning, data analysis, modelling, deep learning and more. 
  • Combining academic knowledge and research into industrial applications.
  • The impact of Artificial Intelligence in the workplace.

Contact details: Call +61 8 8946 6721 or email [email protected] to arrange an interview.

Quotes attributable to Associate Professor Niusha Shafiabady:

“The discussion paper talks about issues with AI that everyone who reads the news has heard about, such as misinformation and disinformation, and collaboration with industry to ‘develop options for voluntary labelling and watermarking of AI-generated materials’.

“It also mentions that firms like ‘Microsoft, Google, Salesforce and IBM’ are ‘already adopting ethical principles’ in using AI.

“It is good fun to read the executive order US President Joe Biden signed on October 30, 2023 on the safe and trustworthy use of AI, and compare it with the Australian Government’s discussion paper.

“To me, the threats AI and technology pose to people are of two types: long-term and short-term. The paper our government has released overlooks the long-term threats altogether. AI is changing our education system and the way kids grow up and learn at school.

“AI will be displacing many jobs. To what extent are we going to allow it to be integrated into our lives? Are we thinking strategically, or putting our faith in the hands of big firms because they say they are ‘already adopting ethical principles’? How are we going to create mandatory guardrails for ‘testing’, ‘transparency’ and ‘accountability’ through collaboration with industry? Who are the industry experts who will verify whether an AI system is ‘biased’ or not?

“The first question we should ask ourselves, in my opinion, is how are the AI systems being created, and who are the people developing them? These days, the so-called ‘AI experts’ are people who have learnt to use free or paid toolboxes that are basically distributed by big firms like Microsoft, Google, Salesforce and IBM.

“What are these tools doing? Do the developers even know that there are ways to avoid issues like ‘bias’ in AI systems? Do they have enough knowledge and training to develop systems that are less prone to making mistakes? Can the big firms that governments pay millions of dollars for their services be trusted?

“This paper, Safe and Responsible AI in Australia, is itself stored on Google’s servers.

“We need regulations and enforcement, not just discussion of good ideas. Misinformation and disinformation are serious threats. We need regulations that mandate watermarking of fake material.

“In the US government’s executive order, they specifically mention what they are implementing and what needs to be done. Here, we are putting our faith and the fate of the people in the hands of industry’s good faith. Sorry, but this won’t work. If we don’t take the threat of technology seriously and come up with mandatory regulations, we will feel the blow as a nation. It is time to act now.”


Contact details:

Raphaella Saroukos she/her

Marketing, Media & Communications
Larrakia Country
T: +61 8 8946 6721
E: [email protected]
W: cdu.edu.au
