All AI has done this election is convince Taylor Swift to endorse Harris

Fears around AI's influence on the election may be overblown, but they helped convince Taylor Swift to endorse Kamala Harris.
  • The reality of AI's impact on elections isn't matching up with the fears.
  • Despite the concerns, AI hasn't been a disruptive force in the 2024 race.
  • So far, AI's been used mainly for memes — and pushing Taylor Swift to endorse Kamala Harris.

For months, experts have feared that artificial intelligence could misinform voters and disrupt elections around the world.

But the mass havoc they expected hasn't really come to pass. Instead of deepfakes of political candidates fooling voters and creating fact-check nightmares, AI has mainly been used by supporters to generate obvious meme art.

In fact, AI's biggest impact this year may have been simply convincing Taylor Swift to endorse Democratic presidential nominee Kamala Harris.

In an Instagram post announcing her support of Harris on Tuesday, the megastar said her endorsement was influenced in part by an AI image Trump posted showing her in a ridiculous oversized American flag hat with the phrase "Taylor Wants You To Vote For Donald Trump."

"It really conjured up my fears around AI, and the dangers of spreading misinformation," Swift wrote in her post. "It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth."

And the AI-wary Swift is in good company — experts and the media have sounded the alarm that AI could fuel a "tech-enabled Armageddon," that we've only seen the "tip of the iceberg," and that "deepfakes threaten to upend global elections."

But, while there have been some attempts to use AI to influence voters — like the phony Joe Biden robocall in New Hampshire, or a deepfaked Kamala Harris campaign video — they don't really seem to be fooling anyone.

Many AI creations have come in the form of fairly obvious memes and satiric videos shared on social media, and fact-checkers, including on-platform ones like X's Community Notes, have quickly shot down any AI content that is remotely convincing.

Even the threat of more sinister attempts, in which foreign actors use AI to spread disinformation, may be somewhat overstated.

Meta, for example, wrote in its most recent Adversarial Threat Report that while Russian, Chinese, and Iranian disinformation campaigns have tapped into AI, their "GenAI-powered tactics" have provided "only incremental productivity and content-generation gains."

And Microsoft, in its most recent Threat Intelligence Report from August, also threw cold water on the idea that AI has made foreign influence campaigns more effective.

Microsoft writes that in identifying Russian and Chinese influence operations, it found that both "have employed generative AI—but with limited to no impact," adding that another Russian operation, which it first reported in April, has "repeatedly utilized generative AI in its campaigns to little effect."

"In total," Microsoft continues in its report, "We've seen nearly all actors seek to incorporate AI content in their operations, but more recently many actors have pivoted back to techniques that have proven effective in the past—simple digital manipulations, mischaracterization of content, and use of trusted labels or logos atop false information."

And it's not just the US; recent elections around the world haven't been substantially impacted by AI.

The Australian Strategic Policy Institute, which analyzed instances of AI-generated disinformation around the UK's July election, found in a recent report that voters never faced the feared "tsunami of AI fakes targeting political candidates."

"The UK saw only a handful of examples of such content going viral during the campaign period," ASPI's researcher Sam Stockwell explained.

But, he added, "While there's no evidence these examples swayed any large number of votes," there were "spikes in online harassment against the people targeted by the fakes" as well as "confusion among audiences over whether the content was authentic."

In a study published in May, the UK's Alan Turing Institute found that of the 112 national elections either held since the start of 2023 or upcoming, only 19 showed AI interference.

"Existing examples of AI misuse in elections are scarce, and often amplified through the mainstream media," the paper's authors wrote. "This risks amplifying public anxieties and inflating the perceived threat of AI to electoral processes."

But, while the researchers found that the "current impact of AI on specific election results is limited," the threats do "show signs of damaging the broader democratic system."

Read the original article on Business Insider