Rationalists
The Rationalist movement, a community rooted in the pursuit of hyper-rational decision-making and utilitarian ethics, has drawn attention both for its extreme offshoots and for its profound influence on the development of artificial intelligence. Originating in online forums like LessWrong and with thinkers such as Eliezer Yudkowsky, Rationalism’s blend of techno-optimism, existential risk analysis, and self-experimentation has spawned both groundbreaking AI research and alarming cult-like factions. Here’s how these threads intertwine.
The Rationalist Movement: Core Ideas and Evolution
Rationalism emerged in the early 2000s as a response to perceived flaws in human reasoning. Its adherents, often self-taught philosophers and programmers, sought to “debug” cognitive biases through techniques like Bayesian reasoning and timeless decision theory. Key tenets include:
- AI alignment: The belief that superintelligent AI could pose an existential threat if not designed with human morality in mind [1,2].
- Longtermism: Prioritizing actions that benefit humanity’s far future, even at the cost of present suffering [3,4].
- Effective Altruism (EA): A utilitarian framework for maximizing global good, often through high-earning careers or donations to causes like AI safety [5,6].
Institutions like the Machine Intelligence Research Institute (MIRI) and Center for Applied Rationality (CFAR) became hubs for workshops and research. However, the movement’s emphasis on epistemic humility and radical openness also created fertile ground for extremist ideologies [1,5].
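The "debugging" toolkit mentioned above centers on Bayes' theorem: revising a prior belief P(H) into a posterior P(H|E) when new evidence arrives. A minimal sketch of that update rule, with purely hypothetical numbers chosen to make the arithmetic visible:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) via Bayes' theorem.

    prior            -- P(H), belief in the hypothesis before seeing evidence
    p_e_given_h      -- P(E|H), probability of the evidence if H is true
    p_e_given_not_h  -- P(E|~H), probability of the evidence if H is false
    """
    # Total probability of observing the evidence at all.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical example: a 1% prior, with evidence ten times likelier
# under the hypothesis than under its negation.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.09)
print(round(posterior, 3))  # → 0.092: the 1% prior rises to roughly 9%
```

The point Rationalists stress is the last line: even strongly diagnostic evidence moves a low prior only modestly, which is the intuition the community's "update incrementally" norm tries to train.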
Extreme Expressions: From Zizians to Cult-Like Factions
The Rationalist community’s fringe has repeatedly veered into violence and quasi-religious fervor:
The Zizians: Vegan Sith and Vigilante Justice
- Origins: Ziz, a disaffected Rationalist blogger, gained followers by alleging corruption within MIRI and promoting hemisphere theory: the idea that human brains contain two distinct, genderable "hemispheres" that can be "debucketed" into separate consciousnesses [1].
- Actions: In 2019, Ziz and associates blockaded a CFAR event, prompting a SWAT response. Later incidents include a 2022 samurai-sword attack on a landlord and a 2025 shootout with Border Patrol agents in Vermont [1].
- Philosophy: Combining radical veganism, the framing of meat-eating as a "holocaust," and a Manichean view of good versus evil, Zizians cast themselves as warriors against systemic corruption, often invoking Star Wars Sith imagery [1].
Other Cult-Like Groups
- Leverage Research: Members engaged in marathon "debugging" sessions to exorcise "demonic subprocesses" and believed the group would one day overthrow the U.S. government [5].
- Black Lotus: A Burning Man camp and Rationalist offshoot accused of coercive practices [5].
- The Vassarites: Followers of Michael Vassar, a former MIRI president, who reportedly used psychedelics and cultivated paranoia to "jailbreak" adherents from societal norms [5].
Critics argue that Rationalism's rejection of tradition and emotion leaves adherents vulnerable to authoritarian leaders and conspiratorial thinking [3,7].
Rationalism’s Footprint in AI Development
Despite its fringe elements, Rationalism has profoundly shaped AI research:
Key AI Labs and Figures
- OpenAI: Co-founded by Elon Musk and Sam Altman after a 2015 conference on AI risk hosted by the Rationalist-aligned Future of Life Institute [8,4]. Early funding included a $30 million grant from Open Philanthropy, an EA organization [9].
- Anthropic: Launched by ex-OpenAI executives (including Dario Amodei) who cited alignment concerns; early funding came from FTX's Sam Bankman-Fried, a prominent EA donor [10,5].
- DeepMind: Early investment came from Rationalist-aligned donors such as Jaan Tallinn, who also funded MIRI [8,4].
Influence on Policy and Talent
- Regulatory Influence: AI Doomers (Rationalists focused on AI risk) now hold advisory roles in U.S. and U.K. governments, pushing for strict AI regulation [8,9].
- Recruitment Pipeline: EA Global conferences and the LessWrong forum have funneled researchers into AI labs, despite concerns that this accelerates the very risks they warn against [8,9].
Contradictions and Criticisms
- Self-Sabotage: By popularizing AI's transformative potential, Rationalists arguably spurred the very arms race they sought to prevent [8,9].
- Ethical Blind Spots: EA's longtermism has been criticized for justifying present harms (e.g., exploitative labor practices) in the name of hypothetical future gains [3,5].
The Paradox of Rationalism
The movement’s legacy is a study in contrasts:
- Innovation vs. Fanaticism: While Rationalist ideas underpin breakthroughs in AI safety, their communities have incubated violence and delusion.
- Elitism vs. Openness: The push for "epistemic rigor" often masks insularity, with critics noting overlaps with eugenics and race science [3,7].
- Precaution vs. Acceleration: AI Doomers now lobby governments to slow AI development, yet their earlier work helped launch the industry [8,9].
As AI reshapes society, the Rationalist movement’s blend of high-minded idealism and extreme pragmatism remains a potent—and perilous—force.
Citations:
1. The Zizians and the Rationalist death cults - by Max Read
2. What's So Bad About Rationalism? - by David Z. Morris
3. TESCREALism: The Acronym Behind Our Wildest AI Dreams and Nightmares
4. The Real-Life Consequences of Silicon Valley's AI Obsession
5. The Failed Strategy of Artificial Intelligence Doomers
6. The Failed Strategy of Artificial Intelligence Doomers — LessWrong
7. Amazon to invest up to $4 billion in Anthropic AI. What to know about the startup. | Vox