Radicalized Anti-AI Activist Should Be A Wake-Up Call For Doomer Rhetoric

Editorial Team


from the stay-connected-to-reality dept

A cofounder of a Bay Area "Stop AI" activist group abandoned its commitment to nonviolence, assaulted another member, and made statements that left the group worried he might obtain a weapon to use against AI researchers. The threats prompted OpenAI to lock down its San Francisco offices a few weeks ago. In researching this movement, I came across statements he made about how almost any actions he took were justifiable, since he believed OpenAI was going to "kill everyone and every living thing on earth." These are detailed below.

I think it is worth exploring the radicalization process and the broader context of AI Doomerism. We need to confront the social dynamics that turn abstract fears of technology into real-world threats against the people building it.

OpenAI's San Francisco Offices Lockdown

On November 21, 2025, Wired reported that OpenAI's San Francisco offices went into lockdown after an internal alert about a "Stop AI" activist. The activist allegedly expressed interest in "causing physical harm to OpenAI employees" and may have tried to acquire weapons.

The article did not mention his name but hinted that, before his disappearance, he had stated he was "not part of Stop AI."1 On November 22, 2025, the activist group's Twitter account posted that it was Sam Kirchner, the cofounder of "Stop AI."

According to Wired's reporting:

A high-ranking member of the global security team said [in OpenAI Slack]: "At this time, there is no indication of active threat activity, the situation remains ongoing and we are taking measured precautions as the assessment continues." Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.

"Stop AI" provided more details on the events leading to OpenAI's lockdown:

Earlier this week, one of our members, Sam Kirchner, betrayed our core values by assaulting another member who refused to give him access to funds. His unstable, erratic behavior and statements he made renouncing nonviolence caused the victim of his assault to fear that he might procure a weapon that he could use against employees of companies pursuing artificial superintelligence.

We prevented him from accessing the funds, informed the police about our concerns regarding the potential danger to AI developers, and expelled him from Stop AI. We disavow his actions in the strongest possible terms.

Later in the day of the assault, we met with Sam; he accepted responsibility and agreed to publicly acknowledge his actions. We were in contact with him as recently as the evening of Thursday, Nov 20th. We did not believe he posed an immediate threat, or that he possessed a weapon or the means to acquire one.

However, on the morning of Friday, Nov 21st, we found his residence in West Oakland unlocked and no sign of him. His current whereabouts and intentions are unknown to us; however, we are concerned Sam Kirchner may be a danger to himself or others. We are unaware of any specific threat that has been issued.

We have taken steps to notify security at the major US companies developing artificial superintelligence. We are issuing this public statement to inform any other potentially affected parties.

A "Stop AI" activist named Remmelt Ellen wrote that Sam Kirchner "left both his laptop and phone behind and the door unlocked." "I hope he's alive," he added.

In early December, the SF Standard reported that "cops [are] still searching for 'unstable' activist whose death threats shut down OpenAI office." Per this coverage, the San Francisco police are warning that he could be armed and dangerous. "He threatened to go to multiple OpenAI offices in San Francisco to 'murder people,' according to callers who notified police that day."

A Bench Warrant for Kirchner’s Arrest

When I searched for any information that had not been reported before, I found a revealing press release. It invited the press to a press conference on the morning of Kirchner's disappearance:

"Stop AI Defendants Speak Out Prior to Their Trial for Blocking Doors of Open AI."

When: November 21, 2025, 8:00 AM.

Where: Steps in front of the courthouse (San Francisco Superior Court).

Who: Stop AI defendants (Sam Kirchner, Wynd Kaufmyn, and Guido Reichstadter), their lawyers, and AI experts.

Sam Kirchner is quoted as saying, "We are acting on our legal and moral obligation to stop OpenAI from developing Artificial Superintelligence, which is equivalent to allowing the murder [of] people I love as well as everyone else on earth."

Evidently, things did not go as planned. That Friday morning, Sam Kirchner went missing, triggering the OpenAI lockdown.

Later, the SF Standard confirmed the trial angle of this story: "Kirchner was not present for a Nov. 21 court hearing, and a judge issued a bench warrant for his arrest."

"Stop AI" – a Bay Area-Focused "Civil Resistance" Group

"Stop AI" calls itself a "non-violent civil resistance group" or a "non-violent activist group." The group's focus is on stopping AI development, especially the race to AGI (Artificial General Intelligence) and "Superintelligence." Their worldview is extremely doom-heavy, and their slogans include: "AI Will Kill Us All," "Stop AI or We're All Gonna Die," and "Shut OpenAI or We're All Gonna Die!"

According to a "Why Stop AI is barricading OpenAI" post on the LessWrong forum from October 2024, the group is inspired by climate groups like Just Stop Oil and Extinction Rebellion, but focused on "AI extinction risk," or in their words, "risk of extinction." Sam Kirchner explained in an interview: "Our main concern is extinction. It's the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying."

Unlike the rest of the "AI existential risk" ecosystem, which is often well-funded by effective altruism billionaires such as Dustin Moskovitz (Coefficient Giving, formerly Open Philanthropy) and Jaan Tallinn (Survival and Flourishing Fund), this particular group is not a formal nonprofit or funded NGO, but rather a loosely organized grassroots group of volunteer-run activism. They made their financial situation quite clear when the "Stop AI" Twitter account replied to a question with: "We're fucking poor, you dumb bitch."2

According to The Register, "STOP AI has four full-time members at the moment (in Oakland) and about 15 or so volunteers in the San Francisco Bay Area who help out part-time."

Since its inception, "Stop AI" has had two central organizers: Guido Reichstadter and Sam Kirchner (the current fugitive). According to The Register and the Bay Area Current, Guido Reichstadter has worked as a jeweler for 20 years. He has an undergraduate degree in physics and math. Reichstadter's prior activities include climate change and abortion-rights activism.

In June 2022, Reichstadter climbed the Frederick Douglass Memorial Bridge in Washington, D.C., to protest the Supreme Court's decision overturning Roe v. Wade. Per the news coverage, he said, "It's time to stop the machine." "Reichstadter hopes the stunt will inspire civil disobedience nationwide in response to the Supreme Court's ruling."

Reichstadter moved to the Bay Area from Florida around 2024 explicitly to organize civil disobedience against AGI development via "Stop AI." Recently, he undertook a 30-day hunger strike outside Anthropic's San Francisco office.

Sam Kirchner worked as a DoorDash driver and, before that, as an electrical technician. He has a background in mechanical and electrical engineering. He moved to San Francisco from Seattle, cofounded "Stop AI," and "stayed in a homeless shelter for 4 months."

AI Doomerism’s Rhetoric

The group's rationale included this claim (published on their account on August 29, 2025): "Humanity is walking off a cliff," with AGI leading to "ASI covering the earth in datacenters."

As 1a3orn pointed out, the original "Stop AI" website said we risked "recursive self-improvement" and doom from any AI models trained with more than 10^23 FLOPs. (The group dropped this prediction at some point.) Later, in a (now deleted) "Stop AI Proposal," the group asked to "Permanently ban ANNs (Artificial Neural Networks) on any computer above 10^25 FLOPS. Violations of the immediate 10^25 ANN FLOPS cap would be punishable by life in prison."

To be clear, dozens of current AI models have been trained with over 10^25 FLOPs.
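For a sense of what that 10^25 FLOPS threshold means in practice, here is a minimal sketch using the common rule of thumb that dense transformer training costs roughly 6 × parameters × tokens FLOPs. The model size and token count below are hypothetical round numbers chosen for illustration, not figures from the article or the proposal.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer,
    using the ~6 * parameters * tokens rule of thumb."""
    return 6 * params * tokens

# The cap from the (now deleted) "Stop AI Proposal".
CAP = 1e25

# A hypothetical 400B-parameter model trained on 15 trillion tokens:
flops = training_flops(400e9, 15e12)
print(f"{flops:.1e}")   # 3.6e+25
print(flops > CAP)      # True: such a run would exceed the proposed cap
```

Under this estimate, a frontier-scale training run lands several times above the proposed cap, which is consistent with the article's point that many current models already exceed 10^25 FLOPs.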

In a "For Humanity" podcast episode with Sam Kirchner, "Go to Jail to Stop AI" (episode #49, October 14, 2024), he said: "We don't really care about our criminal records because if we're going to be dead here pretty soon, or if we hand over control which will ensure our future extinction here in a few years, your criminal record doesn't matter."

The podcast promoted this episode in a (now deleted) tweet, quoting Kirchner: "I'm willing to DIE for this." "I want to find an aggressive prosecutor out there who wants to charge OpenAI executives with attempted murder of eight billion people. Yes. Really, why not? Yeah, straight up. Straight up. What I want to do is get on the news."

After Kirchner's disappearance, the podcast host and founder of "GuardRailNow" and the "AI Risk Network," John Sherman, deleted this episode from podcast platforms (Apple, Spotify) and YouTube. Prior to its removal, I downloaded the video (length 01:14:14).

Sherman also produced an emotional documentary with "Stop AI" titled "Near Midnight in Suicide City" (December 5, 2024, episode #55; see its trailer and promotion on the Effective Altruism Forum). It has since been removed from podcast platforms and YouTube, though I have a copy in my archive (length 1:29:51). It gathered 60k views before its removal.

The group's radical rhetoric was out in the open. "If AGI developers were treated with reasonable precaution proportional to the danger they are cognizantly placing humanity in by their venal and reckless actions, many would have a bullet put through their head," wrote Guido Reichstadter in September 2024.

The above screenshot appeared in a Techdirt piece, "2024: AI Panic Flooded the Zone Leading to a Backlash." The warning signs were there:

Also, as in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to deal with anything else. Some are getting radicalized to a dangerous degree, playing with the idea of killing AI developers (if that's what it takes to "save humanity" from extinction).

Both PauseAI and StopAI stated that they are non-violent movements that don't permit "even joking about violence." That's a necessary clarification for their many followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed person to cross the line.

In early December 2024, I expressed my concern on Twitter: "Is the StopAI movement creating the next Unabomber?" The screenshot of "Getting arrested is nothing if we're all gonna die" was taken from Sam Kirchner.

Targeting OpenAI

The main target of their civil-disobedience-style actions was OpenAI. The group explained that their "actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth." In a tweet promoting the October blockade, Guido Reichstadter claimed about OpenAI: "These people want to see you dead."

"My co-organizers Sam and Guido are willing to put their body on the line by getting arrested repeatedly," said Remmelt Ellen. "We're that serious about stopping AI development."

On January 6, 2025, Kirchner and Reichstadter went on trial for blocking the entrance to OpenAI on October 21, 2024, to "stop AI before AI stop us," and on September 24, 2024 ("criminal record doesn't matter if we're all dead"), as well as blocking the road in front of OpenAI on September 12, 2024.

The "Stop AI" event pages on Luma list further protests in front of OpenAI: on January 10, 2025; April 18, 2025; May 23, 2025 (coverage); July 25, 2025; and October 24, 2025. On March 2, 2025, they had a protest against Waymo.

On February 22, 2025, three "Stop AI" protesters were arrested for trespassing after barricading the doors to the OpenAI offices and allegedly refusing to leave the company's property. It was covered by a local TV station. Golden Gate Xpress documented the activists detained in the police van: Jacob Freeman, Derek Allen, and Guido Reichstadter. Officers pulled out bolt cutters and cut the lock and chains on the front doors. In a Bay Area Current article, "Why Bay Area Group Stop AI Thinks Artificial Intelligence Will Kill Us All," Kirchner is quoted as saying, "The work of the scientists present" is "putting my family at risk."

October 20, 2025 was the first day of the jury trial of Sam Kirchner, Guido Reichstadter, Derek Allen, and Wynd Kaufmyn.

On November 3, 2025, "Stop AI"'s public defender served OpenAI CEO Sam Altman with a subpoena at a speaking event at the Sydney Goldstein Theater in San Francisco. The group claimed responsibility for the onstage interruption, saying the goal was to prompt the jury to ask Altman "about the extinction threat that AI poses to humanity."

Public Messages to Sam Kirchner

"Stop AI" stated it is "deeply committed to nonviolence" and "We wish no harm on anyone, including the people developing artificial superintelligence." In a separate tweet, "Stop AI" wrote to Sam: "Please let us know you're okay. As far as we know, you haven't yet crossed a line you can't come back from."

John Sherman, the "AI Risk Network" CEO, pleaded: "Sam, don't do anything violent. Please. You know this is not the way […] Please don't, for any reason, try to use violence to try to make the world safer from AI risk. It will fail miserably, with terrible consequences for the movement."

Rhetoric’s Ramifications

Taken together, the "imminent doom" rhetoric fosters conditions in which vulnerable individuals can be dangerously radicalized, echoing the dynamics seen in past apocalyptic movements.

In "A Cofounder's Disappearance—and the Warning Signs of Radicalization," City Journal summarized: "We should stay alert to the warning signs of radicalization: a disaffected young person, consumed by abstract risks, convinced of his own righteousness, and embedded in a community that keeps ratcheting up the moral stakes."

"The Rationality Trap – Why Are There So Many Rationalist Cults?" described this exact radicalization process, noting how the more extreme figures (e.g., Eliezer Yudkowsky)3 set the stakes and tone: "Apocalyptic consequentialism, pushing the group to adopt AI Doomerism as the baseline, and perceived urgency as the lever. The world-ending stakes accelerated the 'ends-justify-the-means' reasoning."

We already have a Doomer "murder cult" called the Zizians, and their story is far more bizarre than anything you've read here. Like, awfully more extreme. And, hopefully, such things should remain rare.

What we should discuss is the dangers of such an extreme (and misleading) AI discourse. If human extinction from AI is just around the corner, then, by the Doomers' logic, all of their suggestions are "extremely small sacrifices to make." Unfortunately, the situation we're in is: "Imagined dystopian fears have turned into real dystopian 'solutions.'"

This is still an evolving situation. As of this writing, Kirchner's whereabouts remain unknown.

—————————

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of the book "The Techlash and Tech Crisis Communication" and the "AI Panic" newsletter.

—————————

Endnotes

Filed Under: activism, ai, ai doomerism, doomers, generative ai, guido reichstadter, remmelt ellen, sam kirchner, threats

Companies: openai, stopai
