18 JANUARY 2024
Who: Charles Darwin University Computational and Artificial Intelligence expert Associate Professor Niusha Shafiabady discusses the Australian Government’s Safe and Responsible AI in Australia paper. Associate Professor Shafiabady is an internationally recognised expert and developer of AI data analysis platform Ai-Labz.
What: Associate Professor Shafiabady can discuss:
- The Australian Government’s Safe and Responsible AI in Australia discussion paper.
- Artificial Intelligence, machine learning, data analysis, modelling, deep learning and more.
- Combining academic knowledge and research into industrial applications.
- The impact of Artificial Intelligence in the workplace.
Contact details: Call +61 8 8946 6721 or email firstname.lastname@example.org to arrange an interview.
Quotes attributable to Associate Professor Niusha Shafiabady:
“The discussion paper talks about issues with AI that everyone who reads the news has heard about, such as misinformation and disinformation, and proposes collaboration with industry to ‘develop options for voluntary labelling and watermarking of AI-generated materials’.
“It also mentions that firms like ‘Microsoft, Google, Salesforce and IBM’ are ‘already adopting ethical principles’ in using AI.
“It is good fun to read the executive order on the safe, secure and trustworthy use of AI that US President Joe Biden signed on October 30, 2023, and compare it with the Australian Government’s discussion paper.
“To me, the threats AI and technology pose to people are of two types: long-term and short-term. The paper our government has released ignores the long-term threats altogether. AI is changing our education system and the way kids grow up and learn at school.
“AI will displace many jobs. To what extent are we going to allow it to be integrated into our lives? Are we thinking strategically, or putting our fate in big firms’ hands because they say they are ‘already adopting ethical principles’? How are we going to create mandatory guardrails for ‘testing’, ‘transparency’ and ‘accountability’ through collaboration with industry? Who are the industry experts who will verify whether an AI system is ‘biased’ or not?
“The first question we should ask ourselves, in my opinion, is how are these AI systems being created, and who are the people developing them? These days, the so-called ‘AI experts’ are often people who have learnt to use free toolboxes or paid tools, which are largely distributed by big firms like Microsoft, Google, Salesforce and IBM.
“What are these tools doing? Do the developers even know that there are ways to avoid issues like ‘bias’ in AI systems? Do they have enough knowledge and training to develop systems that are less prone to making mistakes? Can the big firms, to which governments are paying millions of dollars for their services, be trusted?
“This paper, Safe and responsible AI in Australia, is itself hosted on Google’s servers.
“We need regulations and enforcement, not just discussion of good ideas. Misinformation and disinformation are serious threats. We need regulations that mandate watermarking of fake material.
“In the US government’s executive order, they specifically mention what they are implementing and what needs to be done. Here, we are putting our faith, and the fate of the people, in the hands of industry’s good faith. Sorry, but this won’t work. If we don’t take the threat of technology seriously and come up with mandatory regulations, we will feel the blow as a nation. It is time to act now.”
Media contact: Raphaella Saroukos (she/her)