2024: AI Panic Flooded The Zone, Leading To A Backlash
Last December, we published a recap, “2023: The Year of AI Panic.”
Now, it’s time to ask: What happened to the AI panic in 2024?
TL;DR – It was a rollercoaster ride: AI panic reached a peak and then came crashing down.
Two cautionary tales: The EU AI Act and California’s SB-1047.
Please note: 1. The focus here is on the AI panic angle of the news, not other events such as product launches. The aim is to shed light on the effects of this extreme AI discourse.
2. The 2023 recap provides context for what happened a year later. Seeing how AI doomers took it too far in 2023 gives a better understanding of why it backfired in 2024.
2023’s AI panic
At the end of 2022, ChatGPT took the world by storm. It sparked the “Generative AI” arms race. Shortly thereafter, we were bombarded with doomsday scenarios of an AI takeover, an AI apocalypse, and “The END of Humanity.” The “AI Existential Risk” (x-risk) movement moved, gradually and then suddenly, from the fringe to the mainstream. Apart from becoming media stars, its members also influenced Congress and the EU. They didn’t shift the Overton window; they shattered it.
“2023: The Year of AI Panic” summarized the key moments: the two “Existential Risk” open letters (the first by the Future of Life Institute, the second by the Center for AI Safety); the AI Dilemma and Tristan Harris’ x-risk advocacy (now known to be funded, in part, by the Future of Life Institute); the flood of doomsaying in traditional media; and the numerous AI policy proposals that focused on existential threats and sought to surveil and criminalize AI development. Oh, and TIME magazine had a full-blown love affair with AI doomers (it still does).
– AI Panic Agents
Over the years, Eliezer Yudkowsky of Berkeley’s MIRI (Machine Intelligence Research Institute) and his “End of the World” beliefs have heavily influenced a subculture of “rationalists” and AI doomers. In 2023, they embarked on a policy and media tour.
In a TED talk, “Will Superintelligent AI End the World?” Eliezer Yudkowsky said, “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us […] It could kill us because it doesn’t want us making other superintelligences to compete with it. It could kill us because it’s using up all the chemical energy on earth, and we contain some chemical potential energy.” In TIME magazine, he called to “Shut it All Down”: “Shut down all the large GPU clusters. Shut down all the large training runs. Be willing to destroy a rogue datacenter by airstrike.”
Max Tegmark from the Future of Life Institute said: “There won’t be any humans on the planet in the not-too-distant future. This is the kind of cancer that kills all of humanity.”
Next thing you know, he was addressing the U.S. Congress at the “AI Insight Forum.”
And successfully pushing the EU to include “General-Purpose AI systems” in the “AI Act” (discussed further in the 2024 recap).
Connor Leahy from Conjecture said: “I do not expect us to make it out of this century alive. I’m not even sure we’ll get out of this decade!”
Next thing you know, he appeared on CNN and later tweeted: “I had a great time addressing the House of Lords about extinction risk from AGI.” He suggested “a cap on computing power” at 10^24 FLOPs (Floating Point Operations) and a global AI “kill switch.”
Dan Hendrycks from the Center for AI Safety expressed an 80% probability of doom and claimed, “Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation.”[1] He warned that we are on “a pathway toward being supplanted as the Earth’s dominant species.” Hendrycks also suggested “CERN for AI,” imagining “a big multinational lab that would soak up the bulk of the world’s graphics processing units [GPUs]. That would sideline the big for-profit labs by making it difficult for them to hoard computing resources.” He later speculated that AI regulation in the U.S. “might pave the way for some shared international standards that might make China willing to also abide by some of these standards” (because, of course, China will slow down as well… That’s how geopolitics work!).
Next thing you know, he collaborated with Senator Scott Wiener of California to pass an AI Safety bill, SB-1047 (more on this bill in the 2024 recap).
A “follow the money” investigation revealed that it’s not a grassroots, bottom-up movement but a top-down one, heavily funded by a few Effective Altruism (EA) billionaires, mainly Dustin Moskovitz, Jaan Tallinn, and Sam Bankman-Fried.
The 2023 recap ended with this paragraph: “In 2023, EA-backed ‘AI x-risk’ took over the AI industry, AI media coverage, and AI regulation. Nowadays, more and more information is coming out about the ‘influence operation’ and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order. In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.”
2024: Act 1. The AI panic further flooded the zone
With $1.6 billion from the Effective Altruism movement,[2] the “AI Existential Risk” ecosystem has grown to hundreds of organizations.[3] In 2024, their policy advocacy became more authoritarian.
- The Center for AI Policy (CAIP) outlined the goal: to “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”
- The “Narrow Path” proposal started with “AI poses extinction risks to human existence” (according to an accompanying report, The Compendium, “By default, God-like AI leads to extinction”). Instead of asking for a six-month AI pause, this proposal asked for a 20-year pause. Why? Because “two decades provide the minimum time frame to construct our defenses.”
Note that these “AI x-risk” groups sought to ban currently existing AI models.
- The Future of Life Institute proposed stringent regulation on models with a compute threshold of 10^25 FLOPs, explaining it “would apply to fewer than 10 current systems.”
- The International Center for Future Generations (ICFG) proposed that “open-sourcing of advanced AI models trained on 10^25 FLOP or more should be prohibited.”
- Gladstone AI’s “Action Plan”[4] claimed that these models “are considered dangerous until proven safe” and that releasing them “could be grounds for criminal sanctions including jail time for the individuals responsible.”
- Earlier, the Center for AI Safety (CAIS) had proposed to ban open-source models trained beyond 10^23 FLOPs.
Llama 2 was trained with > 10^23 FLOPs and thus would have been banned (see the back-of-the-envelope estimate after this list).
- The AI Safety Treaty and the Campaign for AI Safety wrote similar proposals, the latter spelling it out as “Prohibiting the development of models above the level of OpenAI GPT-3.”
- Jeffrey Ladish from Palisade Research (also from the Center for Humane Technology and CAIP) said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” Siméon Campos from SaferAI set the threshold at Llama-1.
All those proposed prohibitions claimed that crossing these thresholds would bring DOOM.
It was ridiculous back then; it looks more ridiculous now.
“It’s always just a bit higher than where we are today,” venture capitalist Krishnan Rohit commented. “Imagine if we had done this!!”
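To put those compute thresholds in perspective, here is a minimal back-of-the-envelope sketch in Python. It uses the common approximation that training a dense transformer costs roughly 6 FLOPs per parameter per training token; the parameter and token counts are approximate public figures, and the threshold labels are shorthand for the proposals listed above, not official names.

```python
# Back-of-the-envelope comparison of proposed compute thresholds with
# well-known models, using the common approximation:
#   training FLOPs ≈ 6 * parameters * training tokens
# Parameter and token counts below are approximate public figures.

THRESHOLDS = {
    "CAIS open-source ban (10^23)": 1e23,
    "Conjecture compute cap (10^24)": 1e24,
    "FLI / ICFG threshold (10^25)": 1e25,
}

MODELS = {
    # model name: (parameters, training tokens)
    "GPT-3 175B": (175e9, 300e9),
    "Llama 2 70B": (70e9, 2e12),
}

def training_flops(params: float, tokens: float) -> float:
    """Approximate dense-transformer training compute (~6 FLOPs per parameter per token)."""
    return 6 * params * tokens

for name, (params, tokens) in MODELS.items():
    flops = training_flops(params, tokens)
    exceeded = [label for label, cap in THRESHOLDS.items() if flops > cap]
    print(f"{name}: ~{flops:.2e} training FLOPs; exceeds: {exceeded or 'none'}")

# Both models come out above 10^23 (GPT-3 at roughly 3e23, Llama 2 70B at
# roughly 8e23) but below 10^24 and 10^25; i.e., the lowest proposed
# threshold would already have covered models released in 2020 and 2023.
```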
A report entitled “What mistakes has the AI safety movement made?” argued that “AI safety is too structurally power-seeking: trying to raise lots of money, trying to gain influence in corporations and governments, trying to control the way AI values are shaped, favoring people who are concerned about AI risk for jobs and grants, maintaining the secrecy of information, and recruiting high school students to the cause.”
YouTube is flooded with prophecies of AI doom, some of which target children. Among the channels tailored for kids are Kurzgesagt and Rational Animations, both funded by Open Philanthropy.[5] These videos serve a specific purpose, Rational Animations admitted: “In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an ‘intellectual rabbit hole’ to learn more.”
“AI Doomerism is becoming a big problem, and it’s well funded,” observed Tobi Lutke, Shopify CEO. “Like all cults, it’s recruiting.”
And, as in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous degree, toying with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).
Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UnitedHealthcare CEO showed us that it only takes one brainwashed individual to cross the line.
2024: Act 2. The AI panic triggered a backlash
In 2024, the AI panic collided with practical policymaking and began to backfire.
– The EU AI Act as a cautionary tale
In December 2023, European Union (EU) negotiators struck a deal on the most comprehensive AI rules, the “AI Act.” “Deal!” tweeted European Commissioner Thierry Breton, celebrating how “The EU becomes the very first continent to set clear rules for the use of AI.”
Eight months later, a Bloomberg article discussed how the new AI rules “risk entrenching the transatlantic tech divide rather than narrowing it.”
Gabriele Mazzini, the architect and lead author of the EU AI Act, expressed regret and admitted that its reach ended up being too broad: “The regulatory bar maybe has been set too high. There may be companies in Europe that could just say there isn’t enough legal certainty in the AI Act to proceed.”
In September, the EU released “The Future of European Competitiveness” report. In it, Mario Draghi, former President of the European Central Bank and former Prime Minister of Italy, expressed a similar observation: “Regulatory barriers to scaling up are particularly onerous in the tech sector, especially for young companies.”
In December, there were additional indications of a growing problem.
1. When OpenAI released Sora, its video generator, Sam Altman addressed its unavailability in Europe: “We want to offer our products in Europe … We also have to comply with regulation.”[6]
2. “A Visualization of Europe’s Non-Bubbly Economy” by Andrew McAfee from MIT Sloan School of Management exploded online as hammering the EU became a daily habit.
These examples are relevant to the U.S., as California attempted to mimic the EU and Sacramento emerged as America’s Brussels.
– California’s bill SB-1047 as another cautionary tale
Senator Scott Wiener’s SB-1047 was supported by EA-backed AI safety groups. The bill included strict developer liability provisions, and AI experts from academia and entrepreneurs from startups (“little tech”) were caught off guard. They built a coalition against the bill. The headline collage below illustrates the criticism that the bill would strangle innovation, AI R&D (research and development), and the open-source community in California and around the world.
Governor Gavin Newsom eventually vetoed the bill, explaining that there is a need for evidence-based, workable regulation.
You’ve probably spotted the pattern by now. 1. Doomers scare the hell out of people. 2. The fear supports their call for a strict regulatory regime. 3. Those who listen to their fearmongering regret it.
Why? Because 1. Doomsday ideology is extreme. 2. The bills are vaguely written. 3. They don’t consider tradeoffs.
2025
– The vibe shift in Washington
The new administration seems less inclined to listen to AI doomsaying.
Donald Trump’s top picks for relevant positions prioritize American dynamism.
The Bipartisan House Task Force on Artificial Intelligence has just released an AI policy report stating, “Small businesses face excessive challenges in meeting AI regulatory compliance,” “There is currently limited evidence that open models should be restricted,” and “Congress should not seek to impose undue burdens on developers in the absence of clear, demonstrable risk.”
There will probably be a fight at the state level, and if SB-1047 is any indication, it will be intense.
– Will the backlash against the AI panic grow?
This panic cycle is not yet at the point of reckoning. But eventually, society will need to confront how the extreme ideology of “AI will kill us all” became so influential in the first place.
——————————-
Dr. Nirit Weiss-Blatt (@DrTechlash) is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.
——————————-
Endnotes
1. Dan Hendrycks’ tweet and Arvind Narayanan and Sayash Kapoor’s article in “AI Snake Oil”: “AI existential risk probabilities are too unreliable to inform policy.” The similarities = a coincidence ↑
2. This estimation includes the revelation that Tegmark’s Future of Life Institute was no longer a $2.4-million organization but a $674-million organization. It managed to convert a cryptocurrency donation (Shiba Inu tokens) into $665 million (using FTX/Alameda Research). Through its new initiative, the Future of Life Foundation (FLF), FLI aims “to help start 3 to 5 new organizations per year.” This new visualization of Open Philanthropy’s funding shows that the existential risk ecosystem (“Potential Risks from Advanced AI” + “Global Catastrophic Risks” + “Global Catastrophic Risks Capacity Building,” different names for funding Effective Altruism AI Safety organizations/groups) has received ~$780 million (instead of $735 million in the previous calculation). ↑
3. The recruitment in elite universities can be described as “bait-and-switch”: From Global Poverty to AI Doomerism. The “Funnel Mode” is basically, “Come to save the poor or animals; stay to prevent Skynet.” ↑
4. The U.S. government funded Gladstone AI’s report as part of a federal contract worth $250,000. ↑
5. Kurzgesagt got $7,533,224 from Open Philanthropy, and Rational Animations got $4,265,355. Sam Bankman-Fried planned to add $400,000 to Rational Animations but was convicted of seven fraud charges for stealing $10 billion from customers and investors in “one of the largest financial frauds of all time.” ↑
6. Altman was probably referring to a mixed salad of the new AI Act with previous regulations like the GDPR (General Data Protection Regulation) and the DMA (Digital Markets Act). ↑