
New recommendations to curb harms of generative AI

eSafety Commissioner

eSafety’s new tech trends position statement on generative AI provides specific Safety by Design interventions that industry can adopt immediately to improve user safety and empowerment.

“If industry fails to systematically embed safety guardrails into generative AI from the outset, harms will only get worse as the technology becomes more widespread and sophisticated,” eSafety Commissioner Julie Inman Grant said.

“This month, we received our first reports of sexually explicit content generated by students using this technology to bully other students. That follows reports of AI-generated child sexual abuse material and a small but growing number of distressing and increasingly realistic deepfake pornography cases.

“The danger of generative AI is not the stuff of science fiction. Harms are already being unleashed, causing incalculable damage to some of our most vulnerable. Our colleagues in hotlines, NGOs and in law enforcement are also starting to see AI-generated child sexual abuse material being shared. Synthetic versions of this horrific material could complicate child abuse investigations, making it harder for victim identification experts to distinguish real children who need to be rescued from fake imagery.

“And it has long been a concern that AI is being trained on huge datasets whose balance, quality and provenance have not been established, reinforcing stereotypes and perpetuating discrimination.

“Industry cannot ignore what’s happening now. Our collective safety and wellbeing are at stake if we don’t get this right.”

Incorporating advice from respected domestic and international AI experts, the paper details a range of safety measures and interventions, such as:

  • appropriately resourced trust and safety teams
  • age-appropriate design supported by robust age-assurance measures
  • red-teaming and violet-teaming before deployment
  • routine stress tests with diverse teams to identify potential harms
  • informed consent measures for data collection and use
  • escalation pathways to engage with law enforcement, support services or illegal content hotlines, like eSafety
  • real-time support and reporting
  • regular evaluation and third-party audits.

“There is no question that generative AI holds tremendous opportunities, including the potential to contribute to an exciting era of creativity and collaboration. Advanced AI promises more accurate illegal and harmful content detection, helping to disrupt serious online abuse at pace and scale,” Ms Inman Grant said.

“Let’s learn from the era of ‘moving fast and breaking things’ and shift to a culture where safety is not sacrificed in favour of unfettered innovation or speed to market. If these risks are not assessed and effective, robust guardrails integrated upfront, harm will proliferate rapidly once these tools are released into the wild. Solely relying on post-facto regulation could result in a huge game of whack-a-troll.”

The tech trends position statement also sets out eSafety’s current tools and approaches to generative AI, which includes education for young people, parents and educators; reporting schemes for serious online abuse; transparency tools; and the status of mandatory codes and standards.

Recently registered codes will soon require some social media services to take proactive steps to detect and remove child sexual exploitation material. eSafety is also currently considering a revised Search Engine Code, which directly considers generative AI. Mandatory standards are also being developed for Relevant Electronic Services and Designated Internet Services.

“While our regulatory powers around online safety are the longest standing in the world, regulation can only take us so far.

“Online safety requires a coordinated, collaborative global effort by law enforcement agencies, regulators, NGOs, educators, community groups and the tech industry itself. Harnessing all the positives of generative AI, while engineering out the harms, requires a whole-society response.”

The position paper also recommends that users understand what personal information is being accessed from the open web when using generative AI, and then take steps to protect their data. More information on popular generative AI-enabled services (such as Bing, Google Bard, ChatGPT and GPT-4) can be found on the eSafety Guide.

To read the full paper, visit: eSafety.gov.au/industry/tech-trends-and-challenges

To report online harm, including child sexual abuse material and deepfake pornography, visit eSafety.gov.au/report

If you know of a child who is being groomed, or who has had explicit material of them shared or threatened to be shared, report it to the Australian Federal Police-led Australian Centre for Countering Child Exploitation: accce.gov.au/report


About us:

The eSafety Commissioner is Australia’s independent regulator for online safety. Our purpose is to help safeguard all Australians from online harms and to promote safer, more positive online experiences.

eSafety acts as a safety net for Australians who report cyberbullying, serious online abuse or image-based abuse. We can also investigate and remove seriously harmful illegal and restricted content, including online child sexual exploitation material. Find out more: eSafety.gov.au


Contact details:

For media inquiries, contact media@esafety.gov.au or phone 0439 519 684 (calls only, no text)
