Effective Altruism

This is a primer on Effective Altruism.

Effective Altruism began as a question about mosquito nets and ended as an argument for controlling the future of artificial intelligence. The story of how a student pledge to donate 10% of one’s income to fight malaria became a multi-billion-dollar apparatus for steering AI governance is the story of a moral philosophy that consumed itself. The movement that insisted on evidence above all else concentrated its funding in a single crypto exchange. The community that championed rational calculation about where a dollar does the most good concluded that the answer was: not on people alive today. Understanding this trajectory matters because the institutions EA built still channel enormous resources, its alumni still occupy positions in AI labs and government agencies, and its intellectual framework continues to shape how powerful people think about the future.


What is Effective Altruism?

Effective Altruism (EA) is a philosophical and social movement that applies evidence-based reasoning to the question of how to do the most good. Founded in the late 2000s by Oxford philosophers and popularized through a network of nonprofits, pledges, and career-advisory organizations, EA asks its adherents to treat charity the way an investor treats capital: where does a dollar produce the greatest measurable return in human welfare?

The premise sounds modest. Give to the charities that save the most lives per dollar spent. Choose careers that maximize your positive impact on the world. Use randomized controlled trials, cost-effectiveness analyses, and expected-value calculations to cut through the sentimentality that typically governs philanthropy. In its earliest form, EA was a corrective to a real problem: billions spent on charities with no evidence they worked, donor decisions driven by emotional marketing rather than outcomes, and an aid industry that resisted measurement.

But a philosophy built on maximizing expected value has no natural stopping point. If you follow the logic far enough, the calculation leads away from present suffering and toward speculative futures. And that is exactly what happened.

The Idea: Singer, Ord, and the Moral Arithmetic of Charity (2006-2013)

The intellectual roots stretch back to 1972, when philosopher Peter Singer published “Famine, Affluence, and Morality” in Philosophy & Public Affairs.1 Singer’s argument rested on a thought experiment that became the movement’s founding parable: if you walked past a shallow pond where a child was drowning, and you could save that child at the cost of ruining your expensive clothes, you would be morally obligated to wade in. Distance, Singer argued, is morally irrelevant. A child dying of malaria in Sub-Saharan Africa is the same drowning child. Your obligation to help is the same. The uncomfortable conclusion: anyone who spends money on luxuries while people die of preventable diseases is making a moral choice equivalent to walking past the pond.

For three decades, this remained a classroom thought experiment. What changed in the late 2000s was a generation of Oxford graduate students and young professionals who decided to operationalize it.

Holden Karnofsky and Elie Hassenfeld, two former analysts at the hedge fund Bridgewater Associates, founded GiveWell in 2007 to do what few in the charity world had seriously attempted: rank charities by cost-effectiveness.2 Their methodology was deliberately cold. How many quality-adjusted life years (QALYs) does a dollar buy? GiveWell’s top recommendations became legendary within the movement: the Against Malaria Foundation, which distributes insecticide-treated bed nets, could save a life for roughly $3,000-5,000. Most charities could not demonstrate they saved any lives at all. The implication was stark. Donating to your local opera house while bed nets remained unfunded was, by GiveWell’s math, a choice to let people die.

In 2009, Oxford moral philosopher Toby Ord co-founded Giving What We Can, a society whose members pledge to donate at least 10% of their lifetime income to the most effective charities.3 Ord himself pledged to give everything he earned above approximately 18,000 pounds. His colleague William MacAskill, born in 1987 and later the youngest associate professor of philosophy at Oxford, became the movement’s most energetic organizer.4 MacAskill co-founded the Centre for Effective Altruism (CEA) in 2011 and 80,000 Hours (named for the approximate number of working hours in a career) around the same time. 80,000 Hours offered evidence-based career advice: instead of becoming a doctor to help people directly, perhaps you should become a quantitative trader and donate your salary. The math, they argued, favored earning more and giving more over direct service.

This was the movement’s first phase. And it had real virtues. GiveWell’s research was genuinely rigorous. The 10% pledge was a concrete commitment, not cheap talk. The bed nets worked. The Against Malaria Foundation distributed over 300 million nets. Children who would have died of malaria survived. The early EA community attracted people who were serious about reducing suffering and impatient with the self-congratulatory rituals of conventional philanthropy.

MacAskill’s 2015 book Doing Good Better crystallized this phase for a popular audience.5 The pitch was accessible: you already want to help, so here’s how to help more effectively. Donate smarter, choose a high-impact career, think about scale. The book read like a practical manual, and it worked. EA grew from a handful of Oxford philosophers to a global network of chapters, conferences, and organizations.

But the seeds of transformation were already planted in the movement’s core logic. If you follow “where does a dollar do the most good?” to its conclusion, the answer depends on your time horizon. A dollar spent preventing malaria saves a life today. A dollar spent preventing human extinction, if you assign even a tiny probability to that event, saves all future lives. The expected-value calculation swallows the present.

The Drift: From Mosquito Nets to Existential Risk (2013-2019)

The pivot began with a dissertation. In 2013, Nick Beckstead completed his doctoral thesis at Rutgers University arguing that the far future’s vast scale makes reducing existential risks morally paramount.6 If humanity survives for millions of years and potentially colonizes the galaxy, the number of future people dwarfs the current population by orders of magnitude. A small reduction in the probability of human extinction, Beckstead argued, outweighs enormous improvements in the welfare of people alive today. The math was straightforward. The implications were not.

This argument became the intellectual foundation of Longtermism, the philosophical position that the most important moral consideration is the long-run future of humanity. Longtermism did not emerge in isolation. Nick Bostrom’s 2014 book Superintelligence gave it a concrete threat to organize around: the prospect that unaligned Artificial General Intelligence (AGI) might destroy humanity.7 The Rationalist community centered on LessWrong and Eliezer Yudkowsky’s Machine Intelligence Research Institute (MIRI) had been developing this fear for over a decade. What Beckstead and the longtermists added was an ethical framework that made AI risk the obvious top priority for anyone who accepted utilitarian premises and long time horizons.

The cause-prioritization framework, EA’s signature intellectual tool, made the shift feel almost inevitable. EA had always asked: which cause area produces the most good per dollar? The framework evaluated causes on three axes: scale (how many people are affected), neglectedness (how underfunded the area is), and tractability (how solvable the problem is). Global poverty scored well on scale but poorly on neglectedness (billions already flow into aid). Existential risk from AI scored extraordinarily well on scale (all future humans), well on neglectedness (almost no funding in 2013), and at least plausibly on tractability (a technical problem that might yield to research).
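The three-axis framework can be sketched as a toy scoring function. The scores and the multiplicative rule below are invented for illustration (80,000 Hours publishes no single formula like this), but they show the dynamic the paragraph describes:

```python
# Toy version of the scale / neglectedness / tractability framework.
# The 1-10 scores and the multiplicative combination are illustrative
# assumptions, not 80,000 Hours' actual methodology.

def cause_score(scale: int, neglectedness: int, tractability: int) -> int:
    # A cause must score on all three axes to rank highly;
    # a zero on any axis zeroes out the whole cause.
    return scale * neglectedness * tractability

causes = {
    "global poverty": cause_score(scale=7, neglectedness=3, tractability=8),        # 168
    "AI existential risk": cause_score(scale=10, neglectedness=9, tractability=4),  # 360
}
```

Even with weak tractability, near-maximal scale and neglectedness scores let the speculative cause come out ahead.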

80,000 Hours tracked this shift in real time. The organization’s career recommendations quietly migrated from “earn to give in finance and donate to GiveWell charities” toward “work directly on AI safety research.”8 By the mid-2010s, its list of “most pressing world problems” placed AI risk and biosecurity alongside, and increasingly above, global poverty. Ben Todd, co-founder of 80,000 Hours, became a prominent voice arguing that the expected value of working on existential risk dwarfed anything achievable in conventional development work.

The funding apparatus followed. Open Philanthropy, founded by GiveWell’s Holden Karnofsky and backed by Facebook co-founder Dustin Moskovitz’s fortune through Good Ventures, became EA’s most powerful institution.9 Moskovitz and his wife Cari Tuna channeled hundreds of millions through Open Philanthropy, which split its grantmaking between “GiveWell-style” global health interventions and a growing portfolio of longtermist causes: AI safety research, biosecurity, and “EA community building.” By the late 2010s, Open Philanthropy was granting over $200 million annually. The longtermist share grew steadily.

The drift was not unopposed. Within the EA community, a fault line opened between “neartermists” (focused on global poverty, animal welfare, and present-day suffering) and longtermists. Some of the movement’s original supporters watched with alarm as the community’s energy, talent, and money flowed toward speculative future risks. But the longtermists controlled the philosophical high ground and, increasingly, the funding. The expected-value argument was difficult to defeat on its own terms. If you accepted the premises, existential risk won the argument.

By 2019, the transformation was largely complete. EA’s public face still featured bed nets and deworming. Its institutional center of gravity had shifted to AI safety, biosecurity, and the cause areas that Longtermism privileged. The grassroots ethics movement was becoming something else: a funding and talent pipeline for a specific vision of humanity’s future, one that happened to align neatly with the interests of the Rationalist community and the technology industry.

The Merger: When Three Movements Converged (2019-2022)

By the early 2020s, three movements that had originated separately were converging into a single social and institutional network: Effective Altruism, the Rationalist community, and Longtermism. Timnit Gebru and Émile P. Torres would later place all three within the TESCREAL bundle of interconnected Silicon Valley ideologies.10 The convergence was not merely intellectual. It was financial, social, and organizational.

The social overlap was extensive. EA conferences featured Rationalist speakers. LessWrong posts debated EA cause prioritization. CFAR workshops (the Rationalist community’s training programs) funneled participants toward EA-aligned career paths. The same donors funded both communities. The same Bay Area group houses hosted members of both movements. MacAskill’s 2022 book What We Owe the Future, a philosophical treatise on longtermism, made the merger explicit: the most important thing Effective Altruists could do was ensure that the long-run future goes well, and the biggest threat to that future was unaligned AI.11

“Earning to give” became the merger’s most visible doctrine. The concept, promoted heavily by MacAskill and 80,000 Hours, held that the most effective career path for many people was to earn as much money as possible in high-paying industries and donate the proceeds to high-impact causes. The logic was pure EA: your comparative advantage might not be in direct charity work but in generating resources for those who are. In practice, “earning to give” created a pipeline from elite universities to finance, technology, and cryptocurrency, with the understanding that wealth accumulation was itself a form of altruism.

Sam Bankman-Fried (SBF) was the doctrine’s most prominent practitioner. MacAskill met Bankman-Fried while the latter was an undergraduate at MIT around 2012 and persuaded him that earning to give was more impactful than his initial interest in direct work on animal welfare.12 Bankman-Fried took the Giving What We Can pledge. He worked briefly at the quantitative trading firm Jane Street, reportedly donating half his salary to EA causes, before founding the cryptocurrency trading firm Alameda Research and later the exchange FTX.

By 2022, Bankman-Fried’s estimated fortune stood at $24-26 billion. He had become the single largest financier of the EA and Rationalist ecosystem. Through the FTX Future Fund, established in early 2022 and led by Nick Beckstead, SBF channeled approximately $160 million in grants to over 110 nonprofits within its first nine months of operation, with a significant concentration in longtermist causes.13 The Centre for Effective Altruism received $13.9 million. Longview Philanthropy received $17.9 million. The FTX Future Fund was expected to cover up to 40% of all longtermist EA grants in 2022.

The institutional capture extended beyond funding. Open Philanthropy contributed early funding to OpenAI.14 EA-aligned organizations cultivated relationships with government agencies. Jason Matheny, former director of the Intelligence Advanced Research Projects Activity (IARPA), described how effective altruists could “pick low-hanging fruit within government positions” to exert influence on AI policy.15 Paul Christiano, a prominent figure in the LessWrong/EA orbit, went from the OpenAI Alignment Team to founding the Alignment Research Center to being named Head of AI Safety at the U.S. AI Safety Institute at NIST in 2024.

As James O’Sullivan wrote in Noema, effective altruism “supplied the social infrastructure” for the Rationalist movement’s ideas about superintelligence.14 Its core principle of maximizing long-term good through rational calculation made AI existential risk the obvious top priority. On that logic, hypothetical future lives eclipsed the suffering of people living today. EA provided what the Rationalists needed: money, respectability, institutional access, and a moral framework that made building (or attempting to control) the most powerful AI systems on Earth look like the highest form of altruism.

Gideon Lewis-Kraus, profiling the movement for The New Yorker in 2022, captured the growing skepticism: “Early critics observed that the movement seemed to be in the business of selling philanthropic indulgences for the original sin of privilege.”16 And then, with characteristic precision: “It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists.”

The Crisis: FTX and the Fraud at the Heart of the Movement (2022-Present)

On November 11, 2022, FTX filed for bankruptcy. Customer funds had been funneled to Alameda Research for risky bets and personal expenditure. Bankman-Fried was convicted on seven counts of fraud and conspiracy and sentenced to 25 years in prison.17 (For the broader impact on the Rationalist ecosystem, see the Rationalists primer.)

The financial damage to EA was immediate. The FTX Future Fund staff resigned. Pledged grants evaporated. The Centre for Effective Altruism lost $13.9 million in pledged funding. Scientists across the ecosystem found their research stranded.18

But the deeper crisis was epistemological. A movement that defined itself by evidence-based reasoning and careful expected-value calculations had concentrated a catastrophic share of its funding in a single actor. The community that prided itself on identifying tail risks had failed to notice the tail risk sitting at the center of its own financial infrastructure. Warning signs about Bankman-Fried’s ethical conduct had circulated as early as 2018: Open Philanthropy staff raised concerns, and leaders including Beckstead were reportedly aware of worrying behavior at Alameda Research.12 The warnings were dismissed as rumor.

The “earning to give” doctrine stood exposed. The intellectual framework that MacAskill and others had promoted as rational altruism had, in its most prominent application, provided moral cover for reckless gambling with other people’s money. The logic was inherently vulnerable to this failure mode: if accumulating wealth is altruistic because you plan to give it away, the boundary between selfless wealth-seeking and ordinary greed depends entirely on the moral character of the individual. EA had built an elaborate philosophical apparatus for evaluating charities while leaving the moral evaluation of its own donors to vibes and social trust.

The movement’s response was defensive and, eventually, adaptive. Open Philanthropy continued operations and remained the dominant funder, granting approximately $650 million in 2024, split roughly between global health (40%), catastrophic risks (20%), community building (20%), and animal welfare (10%).19 The effective giving ecosystem actually grew approximately 10% from 2023 to 2024, reaching roughly $1.2 billion in total annual giving. New funders like Founders Pledge (which grew from $25 million in 2022 to $140 million) diversified the donor base. But the concentration risk that FTX exposed has not been fundamentally addressed. Open Philanthropy and GiveWell still account for roughly 80% of EA-aligned giving.

The reputational damage was harder to repair. SBF’s face had graced magazine covers as the model EA practitioner. His trial became a referendum on the movement itself. The defense that “most EAs are not fraudsters” was true but missed the point: the fraud was not incidental to the movement’s structure. It was produced by the same intellectual framework. A philosophy that instructs you to maximize expected value, that tells you earning more money is equivalent to doing more good, and that treats present moral constraints as potentially outweighed by future outcomes, will reliably attract people who use those arguments to justify whatever they were going to do anyway.

The Philosophy: Utilitarianism with a Spreadsheet

Beneath the organizational history lies a specific philosophical project. EA is applied utilitarianism: the greatest good for the greatest number, operationalized through quantitative methods borrowed from economics and public health.

EA developed three core analytical tools. Cost-effectiveness analysis compares interventions using metrics like QALYs and DALYs (disability-adjusted life years). Cause prioritization evaluates entire problem areas on scale, neglectedness, and tractability. Expected-value reasoning assigns probabilities to outcomes and multiplies them by their magnitude, allowing comparison between certain small goods (distributing bed nets) and uncertain large goods (reducing the probability of human extinction by 0.001%).
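The first of these tools reduces to simple division. The dollar figures below are illustrative round numbers (the bed-net cost echoes the $3,000-5,000 range cited earlier), not actual GiveWell estimates:

```python
# Illustrative cost-effectiveness comparison in the GiveWell style.
# Costs per life saved are hypothetical round numbers.
interventions = {
    "bed nets": 4_000,            # dollars per life saved (illustrative)
    "surgical program": 40_000,
    "opera house": float("inf"),  # no measurable lives saved
}

budget = 1_000_000  # one hypothetical donor's budget
lives_saved = {name: budget / cost for name, cost in interventions.items()}
# → {'bed nets': 250.0, 'surgical program': 25.0, 'opera house': 0.0}
```

The hundredfold gaps this arithmetic exposes are real, and acting on them was the movement’s founding insight.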

This framework has genuine strengths. It cuts through sentimentality. It demands evidence. It forces uncomfortable comparisons. The insight that some charities are hundreds of times more effective than others per dollar spent is real, and acting on it saves lives. The global health wing of EA has directed billions toward interventions with strong evidence bases, and people are alive because of it.

The problems emerge at the boundaries. Expected-value reasoning breaks down when applied to events with extremely low probabilities and extremely high stakes. If you assign even a 0.01% probability to human extinction from AI, and you multiply that by the value of all future human lives (potentially trillions over millions of years), the resulting expected value overwhelms any present-day intervention. This is Pascal’s Mugging dressed in utilitarian clothing: any sufficiently large hypothetical payoff dominates the calculation, regardless of how speculative the probability estimate is.20
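The breakdown is easy to reproduce with the hypothetical numbers from this paragraph: a 0.01% probability multiplied by trillions of future lives swamps any present-day intervention, regardless of how the certain option performs:

```python
# Pascal's Mugging in expected-value form. All inputs are the
# hypothetical figures from the text, not real probability estimates.

lives_saved_today = 250   # e.g. $1M of bed nets at ~$4,000 per life (illustrative)

p_averted = 0.0001        # a 0.01% chance the donation prevents extinction
future_lives = 10**12     # "potentially trillions" of future lives

ev_bed_nets = lives_saved_today          # near-certain outcome
ev_x_risk = p_averted * future_lives     # ~1e8: wins by a factor of ~400,000
```

Shrinking the probability a hundredfold still leaves the speculative option ahead by more than three orders of magnitude, which is why the calculation has no natural stopping point.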

The philosopher Bernard Williams identified the deeper issue decades before EA existed. Utilitarianism, Williams argued in his 1973 essay “A Critique of Utilitarianism,” alienates a person “from the source of his action in his own conviction”: from what we recognize as moral integrity.21 A moral framework that instructs you to override your instincts, your commitments, and your sense of what matters in favor of whatever the expected-value calculation produces will eventually instruct you to do things that strike most people as monstrous. The history of EA’s institutional evolution is, in part, a demonstration of Williams’s critique: a community that started by following moral intuitions about drowning children ended by following expected-value calculations away from drowning children entirely.

The political philosopher Amia Srinivasan, reviewing the movement for the London Review of Books, identified what the utilitarian framework systematically omits: “Effective altruism doesn’t try to understand how power works, except to better align itself with it.”22 EA has extensive intellectual machinery for comparing charities. It has almost none for analyzing the structural causes of the problems it tries to solve. The framework treats poverty, disease, and existential risk as technical problems amenable to optimized interventions. It does not ask why the problems exist or examine the political and economic systems that produce them. It does not consider whether the accumulation of wealth by donors like Moskovitz and Bankman-Fried might be causally connected to the poverty that EA claims to address.

Srinivasan’s observation cuts to the bone: “capitalism, as always, produces the means of its own correction, and effective altruism is just the latest instance.”22

Critical Perspectives

The critiques of EA cluster around several axes, each illuminating a different failure mode.

The self-serving logic of longtermism. The drift from global poverty to AI safety conveniently relocated the movement’s moral priorities from the developing world (where interventions require unglamorous logistics) to the technology industry (where the problems are intellectually stimulating and the social networks overlap with the donors). The coincidence that Lewis-Kraus identified is hard to ignore.16 The longtermist framework allowed EA to redirect its resources toward problems that its own community was best positioned to work on, and to justify that redirection as a mathematical necessity rather than a social preference.

The failure to analyze power. EA treats the world as a series of optimization problems. This framing erases the political. Why do people die of preventable diseases? Not because charities are inefficient, but because of specific political and economic arrangements. Why is AI development concentrated in a handful of corporations? Not because of some natural law, but because of regulatory choices, capital flows, and power structures. EA’s philosophical apparatus is constitutionally incapable of engaging with these questions because it takes the existing distribution of power as given and asks only how to allocate resources within it. O’Sullivan’s analysis in Noema is precise: effective altruism encourages its proponents to “move into public bodies and major labs, creating a pipeline of staff who carry these priorities into decision-making roles.”14 This is power-seeking dressed as altruism.

The accountability vacuum. A movement that evaluates every charity with forensic rigor applied almost no scrutiny to its own donor class. SBF was not an aberration but a structural product. The “earning to give” doctrine explicitly told young people that wealth accumulation was morally praiseworthy. The EA community provided social validation for financiers and crypto entrepreneurs who framed their wealth-seeking as altruism. When the wealthiest member of the community turned out to be a fraud, the framework had no mechanism for self-correction because it had never built one. The scrutiny was almost always directed outward, toward charities and causes. It was rarely directed inward, toward the movement’s own institutions and power dynamics.

The homogeneity of the movement. Srinivasan observed that “effective altruism has so far been a rather homogenous movement of middle-class white men fighting poverty through largely conventional means.”22 This was written in 2015, and the observation has not aged out. The EA community remains disproportionately male, white, educated at elite universities, and employed in technology and finance. The populations whose welfare EA claims to maximize are almost entirely absent from its decision-making structures. The global poor do not attend EA conferences. People in the developing world do not set EA priorities. The cause-prioritization framework is applied by a demographic that has little direct experience of the problems it claims to solve and enormous proximity to the technology industry whose interests it increasingly serves.

The galaxy-brain failure mode. Vitalik Buterin’s concept of “galaxy-brained reasoning” (reasoning chains so elaborate they can justify anything) applies directly to EA’s intellectual culture.23 The same framework that justifies donating to the Against Malaria Foundation also justifies earning billions through a crypto exchange, funding AI labs, and deprioritizing present suffering in favor of speculative future benefits. If a reasoning system can justify both bed nets and fraud with equal facility, the system has no galaxy-brain resistance. The expected-value calculation is enormously flexible. It will produce whatever conclusion its user’s assumptions generate. And the assumptions are where the politics hide.

Footnotes

  1. Singer, P. (1972). “Famine, Affluence, and Morality.” Philosophy & Public Affairs, 1(3), 229-243.

  2. GiveWell. (2007-present). Charity evaluator and research organization.

  3. Ord, T. (2009). Giving What We Can.

  4. MacAskill, W. See EA Forum profile and Oxford appointment.

  5. MacAskill, W. (2015). Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work that Matters, and Make Smarter Choices about Giving Back. Avery.

  6. Beckstead, N. (2013). On the Overwhelming Importance of Shaping the Far Future. PhD thesis, Rutgers University.

  7. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

  8. 80,000 Hours. Career advice and cause prioritization research.

  9. Open Philanthropy. Grantmaking organization backed by Good Ventures (Dustin Moskovitz and Cari Tuna).

  10. Gebru, T. and Torres, É. P. (2023). “TESCREALism.” See TESCREAL for full references.

  11. MacAskill, W. (2022). What We Owe the Future. Basic Books.

  12. Schuessler, R. (2023). “How Sam Bankman-Fried’s Crypto Empire Collapsed.” TIME.

  13. FTX Future Fund grant data and collapse timeline. See Inside Philanthropy (2022).

  14. O’Sullivan, J. (2025). “The Politics of Superintelligence.” Noema Magazine.

  15. Matheny, J. Quoted in O’Sullivan (2025); see note 14.

  16. Lewis-Kraus, G. (2022). “The Reluctant Prophet of Effective Altruism.” The New Yorker.

  17. United States v. Bankman-Fried (2023). Conviction on seven counts of fraud and conspiracy; sentenced to 25 years.

  18. Khamsi, R. (2022). “Crypto company’s collapse strands scientists.” Science.

  19. Effective Altruism Forum. (2025). “Updates on the Effective Giving Ecosystem: MCF 2025 Memo.”

  20. The “Pascal’s Mugging” problem in EA is discussed extensively in the academic literature on longtermism. See also: Bostrom, N. (2009). “Pascal’s Mugging.” Analysis, 69(3), 443-445.

  21. Williams, B. (1973). “A Critique of Utilitarianism.” In Smart, J.J.C. and Williams, B., Utilitarianism: For and Against. Cambridge University Press.

  22. Srinivasan, A. (2015). “Stop the Robot Apocalypse: The New Utilitarians.” London Review of Books, 37(18).

  23. Buterin, V. (2025). “Galaxy Brain Resistance.”

