Majority of social media users unable to identify AI – report
Growing use of artificial intelligence is outpacing media literacy, a report in Australia has found
Media literacy among adults is not keeping pace with the rapid advancement of generative artificial intelligence (AI), according to research published in Australia on Monday. The trend is leaving internet users increasingly vulnerable to misinformation, the authors of the research said.
The AI industry exploded in 2022 after the launch of ChatGPT, a chatbot and virtual assistant developed by US AI research organization OpenAI. The sector has since attracted billions of dollars in investment, with tech giants such as Google and Microsoft offering tools such as image and text generators.
However, users’ confidence in their own digital media abilities remains low, according to the ‘Adult Media Literacy in 2024’ paper by Western Sydney University.
In a sample of 4,442 adult Australians, respondents were asked how confident they were in their ability to perform a series of 11 media-related tasks requiring critical and technical abilities and/or knowledge. On average, respondents said they could complete just four of the 11 tasks with confidence.
The results are “largely unchanged” since the previous survey was conducted in 2021, the paper noted.
Confidence in identifying misinformation online has not changed at all, according to the research data. In both 2021 and 2024, only 39% of respondents said they were confident they could check whether information they found online was true.
The recent integration of generative AI into online environments makes it “even more difficult for citizens to know who or what to trust online,” the report stated.
The slow growth in media literacy is particularly concerning given the ability of generative AI tools to produce high-quality deepfakes and misinformation, according to associate professor Tanya Notley, an author of the research, as quoted by the outlet Decrypt.
“It’s getting harder and harder to identify where AI has been used. It’s going to be used in more sophisticated ways to manipulate people with disinformation, and we can already see that happening,” she warned.
Combating this requires regulation, which is happening only slowly, Notley said.
Last week, the US Senate passed a bill designed to protect individuals from the non-consensual use of their likeness in AI-generated pornographic content. The bill was adopted following a scandal involving deepfake pornographic images of US pop singer Taylor Swift that spread through social media earlier this year.
Australians now favor online content over television and print newspapers as their source of news and information, the report noted, adding that this represents a “milestone in the way in which Australians are consuming media.”