
You don’t hate AI because of genuine dislike. No, there’s a $1 billion plot by the ‘Doomer Industrial Complex’ to brainwash you, Trump’s AI czar says



Americans' distrust of AI, David Sacks insists, isn't because AI threatens your job, your privacy, and the future of the economy itself. No, according to the venture-capitalist-turned-Trump-advisor, it's all part of a $1 billion plot by what he calls the "Doomer Industrial Complex," a shadow network of Effective Altruist billionaires bankrolled by the likes of convicted FTX founder Sam Bankman-Fried and Facebook co-founder Dustin Moskovitz.

In an X post this week, Sacks argued that public distrust of AI isn’t organic at all — it’s manufactured. He pointed to research by tech-culture scholar Nirit Weiss-Blatt, who has spent years mapping the “AI doom” ecosystem of think tanks, nonprofits, and futurists.

Weiss-Blatt documents hundreds of groups that promote strict regulation or even moratoriums on advanced AI systems. She argues that much of the money behind those organizations can be traced to a small circle of donors in the Effective Altruism movement, including Facebook co-founder Dustin Moskovitz, Skype’s Jaan Tallinn, Ethereum creator Vitalik Buterin, and convicted FTX founder Sam Bankman-Fried.

According to Weiss-Blatt, those philanthropists have collectively poured more than $1 billion into efforts to study or mitigate "existential risk" from AI. However, she pointed to Moskovitz's organization, Open Philanthropy, as "by far" the largest donor.

The organization pushed back strongly on the idea that it was promoting sci-fi-style doom-and-gloom scenarios.

“We believe that technology and scientific progress have drastically improved human well-being, which is why so much of our work focuses on these areas,” an Open Philanthropy spokesperson told Fortune. “AI has enormous potential to accelerate science, fuel economic growth, and expand human knowledge, but it also poses some unprecedented risks — a view shared by leaders across the political spectrum. We support thoughtful nonpartisan work to help manage those risks and realize the huge potential upsides of AI.”

But Sacks, who has close ties to Silicon Valley's venture community and served as an early executive at PayPal, claims that funding from Open Philanthropy has done more than warn of the risks: it has bought a global PR campaign warning of "Godlike" AI. He cited polling showing that 83% of respondents in China view AI's benefits as outweighing its harms, compared with just 39% in the United States, as evidence that what he calls "propaganda money" has reshaped the American debate.

Sacks has long pushed for an industry-friendly, regulation-free approach to AI, and to technology broadly, framed as part of the race to beat China.

Sacks’ venture capital firm, Craft Ventures, did not immediately respond to a request for comment.

What is Effective Altruism?

The “propaganda money” Sacks refers to comes largely from the Effective Altruism (EA) community, a wonky group of idealists, philosophers, and tech billionaires who believe humanity’s biggest moral duty is to prevent future catastrophes, including rogue AI.

The EA movement, founded a decade ago by Oxford philosophers William MacAskill and Toby Ord, encourages donors to use data and reason to do the most good possible. 

That framework led some members to focus on “longtermism,” the idea that preventing existential risks such as pandemics, nuclear war, or rogue AI should take priority over short-term causes.

While some EA-aligned organizations advocate heavy AI regulation or even "pauses" in model development, others, like Open Philanthropy, take a more technical approach, funding alignment research at companies like OpenAI and Anthropic. The movement's influence grew rapidly before the 2022 collapse of FTX, whose founder Bankman-Fried had been one of EA's biggest benefactors.

Matthew Adelstein, a 21-year-old college student who writes a prominent Substack on EA, notes that the landscape is far from the monolithic machine Sacks describes. Weiss-Blatt's own map of the "AI existential risk ecosystem" includes hundreds of separate entities, from university labs to nonprofits and blogs, that share similar language but not necessarily coordination. Yet Weiss-Blatt concludes that the "inflated ecosystem" is not "a grassroots movement. It's a top down one."

Adelstein disagrees, arguing that the reality is "more fragmented and less sinister" than Weiss-Blatt and Sacks portray.

“Most of the fears people have about AI are not the ones the billionaires talk about,” Adelstein told Fortune. “People are worried about cheating, bias, job loss — immediate harms — rather than existential risk.”

He argues that pointing to wealthy donors misses the point entirely. 

“There are very serious risks from artificial intelligence,” he said. “Even AI developers think there’s a few-percent chance it could cause human extinction. The fact that some wealthy people agree that’s a serious risk isn’t an argument against it.”

To Adelstein, longtermism isn’t a cultish obsession with far-off futures but a pragmatic framework for triaging global risks. 

“We’re developing very advanced AI, facing serious nuclear and bio-risks, and the world isn’t prepared,” he said. “Longtermism just says we should do more to prevent those.”

He also brushed off accusations that EA has turned into a quasi-religious movement.

"I'd like to see the cult that's dedicated to doing altruism effectively and saving 50,000 lives a year," he said with a laugh. "That would be some cult."
