Deepfake content ‘explosion’ lies ahead
Deepfake artificial intelligence (AI) videos and photographs are an epidemic that will only worsen as more people gain access to the technology and use it in jest or for more sinister motives, such as swindling victims.
Information and communication technology (ICT) and legal experts have issued this warning in the wake of deepfake images that have featured the likes of SABC journalist Leanne Manas, business people Johann Rupert and Elon Musk, as well as government leaders including President Cyril Ramaphosa and former police minister Bheki Cele.
Recent deepfake videos have included adverts for online trading platforms that lead people to invest after being contacted by consultants from a company called Banxso. The videos, which pop up on Facebook and YouTube, feature AI-altered footage of Manas, Musk, Rupert, Nicky Oppenheimer and Patrice Motsepe purportedly claiming that investors can make excellent returns through online trading.
Scores of investors who lost millions of rands have filed complaints against Banxso with the Financial Sector Conduct Authority, which is investigating the matter. Banxso is working with the authority and says it does not know who is behind the fake video adverts that appear to be providing its consultants with leads.
In April police arrested Pietermaritzburg resident Scebi Thabisso Nene after he allegedly distributed pornographic images, circulating online, in which the faces of Ramaphosa, Cele and Cele's wife were superimposed on the bodies of unknown individuals. Nene was charged with contravening the Cybercrimes Act and two counts of crimen injuria.
This type of content is likely to increase, warned Arthur Goldstuck, an ICT expert and author of The Hitchhiker’s Guide to AI.
“We’re going to see an explosion of deepfake videos, but we must bear in mind that deepfakes have been a factor of misinformation, propaganda and manipulation on social media in South Africa for some years,” Goldstuck said.
“The difference now is that the tools to create such content are available to all. As the tools become better, we’ll also see a dramatic increase in the amount of deepfake content being distributed or used to blackmail, coerce, convince and manipulate. Sadly, individuals have never been more willing to be manipulated, if the messaging aligns with their worldviews.”
Goldstuck said deepfake videos would “definitely have legal implications if used for blackmail or coercion”, but one had to look at how the video was used and the extent to which it violated the rights of others.
Unlike some other jurisdictions, South Africa does not explicitly criminalise deepfakes, said Lisa Emma-Iwuoha, an ICT specialist at legal firm Michalsons. But crimes related to deepfakes are covered under the provisions of the Cybercrimes Act.
According to section 8 of the Act, anyone who unlawfully, and with the intention to defraud, makes a misrepresentation using data or a computer program, or by interfering with data or a computer program, which causes actual or potential prejudice to another person, is guilty of the offence of cyber fraud.
“Put simply, if you use or alter data or a computer program intending to defraud someone and they are actually or potentially prejudiced, then you are guilty of cyber fraud. The essence of deepfakes is using tools to manipulate or alter data like images, audio or videos. Still, for it to be a crime, you would need to have the intention to defraud someone else as well,” Emma-Iwuoha said.
Victims of such impersonation, including Manas, Rupert and Motsepe, could also file a civil claim under the Protection of Personal Information Act (POPIA), she said, adding: “However, it may be tricky to establish and identify the responsible party or operator who didn’t fulfil their data protection obligations in order to pursue the matter. Every country is grappling with deepfakes and effective ways to protect their citizens from them.”
The European Union, United Kingdom, United States, Australia and China have all taken steps to address deepfakes, Emma-Iwuoha noted.
“Deepfakes are a big issue within South Africa and globally, especially as the use of AI is becoming normalised in various contexts. In general, people often take what they see online at face value and usually believe it is true without fact-checking or interrogating it. As information moves quickly in this social media age, disinformation like deepfakes spreads quickly,” she said.
According to social media law expert Emma Sadlier, individuals hold the right to the commercial use of their own images.
“It’s almost like under intellectual property law: people can’t use your image in their advertising unless they have your permission. There’s no doubt that the same would apply to deepfake content, whether it’s real or fake,” Sadlier said.
“Deepfakes are often used in extortion cases. I see it often, particularly with kids, where somebody creates a deepfake porn picture of them and then they start getting extorted.
“I’m seeing even primary school kids creating deepfake porn pictures of other kids in their class. I’m getting a call from a principal at least once a week, sometimes once a day, to say that the kids at the school have got hold of this app … and that they’re turning their classmates into nudes, almost for humour rather than anything else.”
Sadlier said deepfake content could lead to charges of fraud and crimen injuria depending on the nature of the images.
“If the image of the person is used in a way that causes reputational harm, there’s very likely to be crimen injuria, which is a criminal offence. I think that in just about any instance where somebody uses your image to cause another person harm, whether that’s commercial harm or any other kind of harm, whether it’s a scam to fleece them out of their money or to extort them, then it is very likely it would be crimen injuria,” she said.
The real challenge is to find out who created and shared the videos or photographs.
“The keeper of the keys is usually the platform that the content was created on, whether that is Facebook or Instagram or TikTok or WhatsApp,” Sadlier said.
“The issue is that they will only hand over the basic subscriber information, which is basically everything they know about who is behind an account, on a court order. But they don’t recognise South African court orders, which is ridiculous. So you’ve got to follow them to their jurisdiction.”
This would mean getting an order in the United States if one was dealing with Meta, the owner of Facebook, or in Singapore to tackle TikTok deepfakes.
“It’s a trans-jurisdictional nightmare, and it’s so expensive,” Sadlier added.
Online platforms such as Google-owned YouTube and Meta’s Facebook are aware of the rising problem and say they are investing in technology to combat the crime.
A spokesperson for Meta said scammers are using “every avenue available” to defraud people and constantly adapt to evade enforcement.
“Content that purposefully intends to deceive or exploit others for money violates our policies, and we remove this content when it’s found. We continue to invest in detection technology and review teams, and we share information with law enforcement so they can prosecute scammers,” he said.
A YouTube spokesperson said protecting the platform’s users is “a top priority” and it has “strict policies” governing adverts.
“These scams are prohibited on our platforms and when we find ads that breach our policies we take immediate action, including removing the ads and suspending the account when necessary. We are investing heavily in our ability to detect and remove deepfake scam ads and the bad actors behind them,” he said.
According to Google, it uses a combination of machine learning and human review to enforce its policies, and its systems proactively monitor videos and livestreams to detect and remove deceptive behaviour.
In the fourth quarter of 2024 Google removed more than 19 million channels and 146 932 videos for violating its spam, deceptive practices and scams policy. It also removed about 9 million videos for violating its policies, and over 96% of them were “first flagged by machines rather than humans”.
“More than 96% of the violative videos first detected by machines received fewer than 1K views before they were removed from YouTube. Of the videos detected by machines, 53.46% were removed before they received a single view,” Google said.
Pieter Geldenhuys, a futurist and the director of the Institute for Technology Strategy and Innovation, said that despite the dark side of AI being used to create deepfake content, he foresees a future in which it will be used for legitimate commercial interests, such as advertising products during sports events.
Using 5G communication technology and an advanced camera that can record stereoscopic 360-degree video, people around the world will be able to put on their virtual reality (VR) goggles and instantly be inside a soccer stadium. The camera technology would have eight lenses and microphones to ensure the viewer has “complete 360-degree access to all images captured by the camera in real-time”.
“They can choose to jump from seat to seat to obtain the best view of the game as it unfolds. Viewers will be able to watch the game in real-time as if they are in the stadium, and do a virtual high-five with fans surrounding each VR camera. All of this is available from the comfort of your favourite couch,” Geldenhuys said.
“The strategic positioning of the cameras throughout the stands also unlocks new advertising opportunities. Hired models wearing branded clothing, for example, can be used to promote a diverse array of brands.”
This is where the use of deepfakes comes in: the branding can be personalised for every viewer, giving advertisers the ultimate dream of precisely reaching their target markets.