
Australian media need generative AI policies to help navigate misinformation and disinformation

RMIT University

New research into generative AI images shows that only about a third of the media organisations surveyed had an image-specific AI policy in place at the time of the research.

The study, led by RMIT University in collaboration with Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors or staff in related roles from 16 leading public and commercial media organisations across Europe, Australia and the US about their perceptions of generative AI technologies in visual journalism.

Lead researcher and RMIT Senior Lecturer, Dr TJ Thomson, said while most staff interviewed were concerned about the impact of generative AI on misinformation and disinformation, factors that compound the issue, such as the scale and speed at which content is shared on social media and algorithmic bias, were out of their control. 

“Photo editors want to be transparent with their audiences when generative AI technologies are being used, but media organisations can't control human behaviour or how other platforms display information,” said Thomson, from RMIT’s School of Media and Communication. 

“Audiences don’t always click through to learn more about the context and attribution of an image. We saw this happen when AI images of the Pope wearing Balenciaga went viral, with many believing it was real because it was a near-photorealistic image shared without context. 

“Photo editors we interviewed also said images they receive don’t always specify what sort of image editing has been done, which can lead to news sites sharing AI images without knowing, impacting their credibility.” 

Thomson said having policies and processes in place that detail how generative AI can be used across different communication forms could prevent incidents of mis- and disinformation, such as the altered images of Victorian MP Georgie Purcell, from happening.  

“More media organisations need to be transparent with their policies so their audiences can also trust that the content was made or edited in the ways the organisation says it is,” he said. 

Banning generative AI use not the answer 

The study found five of the surveyed outlets barred staff from using AI to generate images, and three of those outlets barred only photorealistic images. Others allowed AI-generated images if the story was about AI.

“Many of the policies I’ve seen from media organisations about generative AI are general and abstract. If a media outlet creates an AI policy, it needs to consider all forms of communication, including images and videos, and provide more concrete guidance,” Thomson said. 

“Banning generative AI outright would likely be a competitive disadvantage and almost impossible to enforce.

“It would also deprive media workers of the technology’s benefits, such as using AI to recognise faces or objects in visuals to enrich metadata and to help with captioning.” 

Thomson said Australia was still at “the back of the pack” when it came to AI regulation, with the US and the EU leading. 

“Australia’s population is much smaller, so our resources limit our ability to be flexible and adaptive,” he said. 

“However, there is also a wait-and-see attitude where we are watching what other countries are doing so we can improve or emulate their approaches. 

“I think it’s good to be proactive, whether that’s from government or a media organisation. If we can show we are being proactive to make the internet a safer place, it shows leadership and can shape conversations around AI.” 

Algorithmic bias affecting trust  

The study found journalists were concerned about how algorithmic bias could perpetuate stereotypes around gender, race, sexuality and ability, leading to reputational risk and distrust of media.  

“We had a photo editor in our study type a detailed prompt into a text-to-image generator to show a South Asian woman wearing a top and pants,” Thomson said. 

“Despite detailing the woman’s clothing, the generator persisted with creating an image of a South Asian woman wearing a sari.” 

“Problems like this stem from a lack of diversity in the training data. It leads us to question how representative our training data really are, and who is being represented in our news and stock photos, but also in cinema and video games, all of which can be used to train these algorithms.”

Copyright was also a concern for photo editors as many text-to-image generators were not transparent about where their source materials came from.  

While there have been generative AI copyright cases making their way into the courts, such as The New York Times’ lawsuit against OpenAI, Thomson said it’s still an evolving area. 

“Being more conservative and only using third-party AI generators that are trained on proprietary data or only using them for brainstorming or research rather than publication can lessen the legal risk while the courts settle the copyright question,” he said.  

“Another option is to train models on an organisation's own content, so they can be confident they own the copyright to the resulting generations.”

Generative AI is not all bad 

Despite concerns about mis- and disinformation, the study found most photo editors saw many opportunities for using generative AI, such as brainstorming and generating ideas. 

Many were happy to use AI to generate illustrations that were not photorealistic, while others were happy to use AI to generate images when good stock images were not available.

“For example, existing stock images of bitcoin all look quite similar, so generative AI can help fill a gap in what is lacking in a stock image catalogue,” Thomson said.  

While there was concern about losing photojournalism jobs to generative AI, one editor interviewed said they could imagine using AI for simple photography tasks. 

“Photographers who are employed will get to do more creative projects and fewer tasks like photographing something on a white background,” said the interviewed editor.

“One could argue that those things are also very easy and simple and take less time for a photographer, but sometimes they’re a headache too.” 

“Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies” was published in Digital Journalism. (DOI: 10.1080/21670811.2024.2331769)

T.J. Thomson (RMIT University), Ryan Thomas (Washington State University) and Phoebe Matich (Queensland University of Technology) are co-authors.  

Thomson was a visiting fellow at the German Internet Institute in Berlin, which allowed him to complete the European portion of this research. 

Contact details:

For interviews, contact Dr TJ Thomson: 0435 071 252

General enquiries: 0439 704 077
