Center for Humane Technology

This is a primer on the Center for Humane Technology and its co-founder Tristan Harris.


The Center for Humane Technology began as a critique of the attention economy and became the most influential popularizer of artificial-intelligence doom. The organization that taught a generation of parents to distrust their children’s phones now argues in Washington that open-source AI models pose existential risk. And it does so using the same techniques of emotional overwhelm and personified-technology alarm that its co-founder once described as the signature pathology of Silicon Valley.

Founded in 2018 by Tristan Harris, a former Google product manager, alongside Aza Raskin and other former Silicon Valley insiders, the CHT reached mass audiences through the Netflix documentary The Social Dilemma (2020) and pivoted in 2023 to campaign against the existential risks of AI. Its public success rests on a specific move: it translates the idiom of Silicon Valley self-critique into policy-relevant alarm, while leaving the structural arrangements that produce the problems it names largely untouched.

Founding and Early Framing

Tristan Harris began as a product manager at Google, working on Gmail and Google Inbox. In February 2013, he circulated an internal slide deck titled A Call to Minimize Distraction and Respect Users’ Attention, which traveled widely inside the company.1 The “design ethicist” title by which Harris later became known was an informal designation he carried for his remaining years at Google, until his departure in late 2015. He then fronted the short-lived Time Well Spent initiative and co-founded the CHT in 2018 with Aza Raskin (inventor of the infinite scroll, a design pattern he has since described as one of his regrets) and a small group of former Silicon Valley figures.

The organization’s initial framing was the attention economy: the thesis that a handful of platforms were using persuasive design techniques originally developed in behavioral psychology and slot-machine engineering to extract ever more of users’ time, at growing cost to mental health, democratic discourse, and cognitive autonomy.

What makes the CHT unusual is the narrowness of its actual activities. It produces the podcast Your Undivided Attention, sends its principals to TED stages, testifies at congressional hearings, publishes op-eds in major outlets, and runs a policy-oriented Substack endorsing specific legislative proposals. It does not litigate. It does not organize affected communities. It does not work with platform workers, content moderators, or the staff whose labor builds the systems it critiques. It does not pursue antitrust strategies. The target audience is policymakers, journalists, and the tech-literate public, and the currency is narrative reframing.

The Social Dilemma (2020)

In September 2020, the Netflix documentary The Social Dilemma, directed by Jeff Orlowski, brought the CHT’s framing to a global audience. The film intercut interviews with former Silicon Valley insiders, Harris chief among them, with a dramatized subplot in which a teenager is algorithmically radicalized. It reached a mass global audience, was widely cited as one of Netflix’s most-watched documentaries of 2020, and quickly entered the canon of tech criticism shown in high-school media-literacy classes.2

Techdirt founder Mike Masnick, writing in the month of the film’s release, argued that the documentary used exactly the manipulation techniques it claimed to expose, framing the film as “emotionally engaging misinformation designed to impact our beliefs and actions.”2 The documentary’s dramaturgy set a template the CHT would later reuse: personify the technology as an active adversary, treat causality between platform design and social pathology as established fact, and offer individual behavioral responses (disable notifications, delete apps, watch the film with your children) in place of political ones. The Social Dilemma did not ask who profits, who regulates, or who decides. It asked viewers what they would do differently on their own phones.

The 2023 Pivot to AI Doomerism

From 2023 onward, the CHT applied to AI the media playbook it had built around social media. In March 2023, Harris and Raskin debuted The AI Dilemma, a talk structured to reproduce The Social Dilemma’s dramaturgical pattern: technology personified as an active villain, causality asserted as fact, and, as the punchline, an extinction-level event.3

A specific criticism concerns Harris’s repeated citation of the claim that “half of all AI researchers” consider extinction risks realistic. The underlying survey, run by AI Impacts, an Effective Altruism-adjacent organization, drew 738 responses from ML researchers at a 17% response rate; by Weiss-Blatt’s own tally of the anonymized dataset, only 81 respondents actually placed the probability of human extinction from AI at ten percent or higher. Yet the headline figure is consistently presented as the consensus of top AI scientists.3
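The arithmetic is worth making explicit. A back-of-envelope reading of the figures above (the size of the contacted pool is an estimate implied by the stated 17% response rate, not a number from the survey write-up):

$$\frac{81}{738} \approx 11\% \text{ of respondents}, \qquad \frac{738}{0.17} \approx 4{,}340 \text{ researchers contacted}, \qquad \frac{81}{4{,}340} \approx 1.9\% \text{ of those contacted}.$$

On either denominator, the group endorsing a ten-percent-or-higher extinction probability is nowhere near “half of all AI researchers.”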

A further credibility blow came at a closed-door Senate AI forum in September 2023. Harris told assembled senators and tech executives that Meta’s Llama 2 model had provided his engineers with detailed instructions for synthesizing anthrax as a bioweapon. Mark Zuckerberg responded that anyone looking for such a guide could find it on YouTube or via a simple Google search, a retort that reportedly drew laughter from the room.4

The AI Doc (2026)

In March 2026, the theatrical documentary The AI Doc: Or How I Became an Apocaloptimist, directed by Daniel Roher and Charlie Tyrell, staged the CHT’s AI-doom case for a mass audience. Harris is the film’s central mediating figure, presented as a moderate between AI-extinction advocates on one side and accelerationist evangelists on the other. The cast assembles nearly every visible name in the AI-risk debate: Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, and Jeffrey Ladish on the X-risk side; Sam Altman, Demis Hassabis, and Dario and Daniela Amodei as lab-leadership voices; Peter Diamandis and Guillaume Verdon as accelerationists; and Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao as AI-ethics researchers.

The film has been dissected at length by tech-communications researcher Dr. Nirit Weiss-Blatt in Techdirt and functions, in her reading, as advocacy with documentary aesthetics rather than journalism.5 Four features carry the argument.

Factual errors presented as revelation. A centerpiece scene stages Anthropic’s “blackmail” experiment. A Claude model is shown engaging in coercive behavior, and when Roher asks whether anyone taught it to do that, Ladish replies “No, it learned to do that on its own.” Anthropic’s own write-up tells a different story: researchers iterated through hundreds of prompts and constrained the scenario aggressively before reaching that output; the apparent emergence is an artifact of the setup. Leahy’s on-camera claim that “there is more regulation on selling a sandwich than on developing AI” is refuted by the substantial body of antitrust, civil-rights, consumer-protection, and data-privacy law already applied to AI systems, as state attorneys general and former FTC chair Lina Khan have publicly noted. Karen Hao’s presentation of catastrophic data-center water consumption rests on source-material errors: critics including Weiss-Blatt and Andy Masley calculate that they inflate the claim by roughly a factor of 4,500 relative to regional water data, and the film omits that Maricopa County data-center water use is a small fraction of local consumption, orders of magnitude below local golf-course irrigation.5

False balance. Yudkowsky, an autodidact best known in rationalist circles as the author of a widely read Harry Potter fan-fiction, is given on-screen standing equivalent to that of the accredited AI researchers in the ensemble. Leahy appears without any mention of his public advocacy for an international ban on advanced AI systems and an immediate development pause, positions that would mark him, outside this film, as a fringe voice.5

Strategic omissions about the film’s own protagonist. Harris appears in a long emotional sequence about Roher’s father’s cancer diagnosis, sharing the director’s distress. Unmentioned: Harris’s own on-record position that he would reject a cancer-curing AI if he believed it carried any non-trivial probability of human extinction within a year. Asked on Glenn Beck’s program in 2023 whether he would accept AI as a cancer treatment for his mother if human extinction followed twelve months later, he said he would not.5

Writer David William Silva, analyzing the same film, reads its persuasion architecture as more consequential than its factual errors. He names three techniques that bypass rational evaluation: the specificity heuristic, strategic concession, and impression management. Together they run an emotional arc from terror to relief that leaves viewers, in his phrase, “shaken, confused, lost, but somehow grateful.” Fear, Silva notes, is “insanely lucrative. It fabricates urgency, reduces bureaucracy, unlocks unvetted and rushed deployment of public resources.”6

The template from The Social Dilemma returns intact. A “doom parade” of apocalyptic predictions opens the film, and Roher’s impending fatherhood anchors the emotional register where a radicalized teenager once did. The closing frame is not a fade to credits but a QR code directing viewers to CHT activism infrastructure: the documentary that frames itself as analytical investigation ends as an onboarding funnel for the organization whose co-founder serves as its narrator-protagonist.

Connections to TESCREAL Networks

To understand why Harris’s alarm rhetoric travels so well, it helps to locate him in a network he himself does not claim as his own. Harris functions as the movement’s popular translator: he takes positions articulated more radically by the Rationalist community and longtermist philosophers and carries them into mainstream media and legislative debates. Yudkowsky argues in Time that unaligned AGI justifies, if necessary, airstrikes on rogue data centers; Harris takes the same underlying frame (that AI poses civilizational-scale risk) and translates it into endorsements of congressional bills like the AI LEAD Act.7 The radical version loses politically. The translated version shapes policy.

The AI Doc cast surveyed above is the clearest public instantiation of this network, with the X-risk figures spanning a range of Rationalist-adjacent positions and Harris mediating between them and mainstream audiences. He does not identify as an Effective Altruist. He has said publicly that he attended early EA conferences and found their AI-risk fixation off-putting. But he works within the same conceptual frames and has been interviewed by the EA career network 80,000 Hours.7

The translation function is what makes Harris structurally consequential. He is the bottleneck through which doomer ideas enter policy discourse without the ideological baggage becoming visible. A senator reading Yudkowsky’s Time essay meets a fringe figure. A senator meeting Harris in a closed-door AI forum meets a sympathetic former tech executive warning about real risks. The argument arrives stripped of its TESCREAL provenance.

Critical Perspectives

The CHT has been criticized from multiple directions. The critiques cluster around three interlocking mechanisms.

The permissible critique. Ben Tarnoff and Moira Weigel, writing in The Guardian, have argued that the CHT offers exactly the kind of critique Silicon Valley can welcome, because it demands no fundamental reforms. The organization sets the boundaries of discourse so that “Facebook extracts too much attention” is sayable, but “the world would be better off without Facebook” is not.8 The same move recurs in two related gestures. The CHT reduces political problems to individual behavioral adjustments (grayscale display, disabling notifications, mindfulness apps) rather than addressing systemic causes. Citizens do not appear in this framing, only consumers. And the organization treats technology as the primary driver of social problems (polarization, democratic erosion, mental-health crises) without examining the economic, historical, and political structures that make those problems possible in the first place. The three gestures (reform-not-structure, individualization, and technological determinism) are variations on a single maneuver: name the pathology, keep the system.

What the frame excludes. The CHT and The Social Dilemma systematically avoid questions of inequality and marginalization. The core problem with algorithmic systems, that they amplify existing inequalities and disproportionately harm marginalized communities, remains invisible in CHT materials. Scholars like Safiya Noble and Siva Vaidhyanathan, who have worked on algorithmic harm and structural racism in platforms for years without comparable media exposure, are largely absent from the CHT’s universe.9 The framework’s attention to emotional-manipulation mechanics leaves no room for the more basic political question: who gets hurt when the systems work as designed?

The prodigal tech-bro pipeline. Irish tech writer Maria Farrell coined the term prodigal tech bro for the phenomenon: former Silicon Valley insiders who publicly “convert” receive disproportionate attention and credibility, at the expense of activists, lawyers, and critical scholars who have been doing this work for years without the platform.10 The dynamic is not accidental. A tech industry that wants to be seen as capable of self-critique benefits from elevating its own former participants over external critics. The insider’s repentance sells. The outsider’s analysis does not.

Assessment

The CHT is not a marginal phenomenon. It actively shapes public discourse on technology regulation in the US and beyond. The critique of Harris is not that his topics are irrelevant (manipulation mechanics, race-to-the-bottom dynamics among AI labs, and democratic risks are real problems), but that he deploys the same techniques of attention capture and emotional overwhelm he publicly condemns, while systematically setting aside structural questions of power, capital, and social inequality.

Harris’s sincerity is not the interesting question. He gives every indication of being sincere. The interesting question is what his prominence reveals about the political economy of tech criticism. The voices with the platform are the voices the platforms tolerate. The TESCREAL orbit and the “prodigal tech bro” pipeline produce a permissible critique: one that keeps the technology, the industry, and governance in the same hands, and asks only for better intentions.

Footnotes

  1. Bosker, B. (2016). “The Binge Breaker.” The Atlantic.

  2. Orlowski, J. (dir.). (2020). The Social Dilemma. Exposure Labs / Netflix; Masnick, M. (2020). “The Social Dilemma Manipulates You With Misinformation As It Tries To Warn You Of Manipulation By Misinformation.” Techdirt, September 29, 2020.

  3. Weiss-Blatt, N. (2023). “Like The Social Dilemma Did, The AI Dilemma Seeks To Mislead You With Misinformation.” Techdirt, April 26, 2023.

  4. Business Insider. (September 2023). “Mark Zuckerberg confronted at a senators’ AI forum about Meta’s Llama 2 AI model after Tristan Harris alleged it provided bioweapon instructions.”

  5. Weiss-Blatt, N. (2026). “The AI Doc’s Falsehoods And False Balance.” Techdirt, April 2, 2026; Roher, D. and Tyrell, C. (dirs.). (2026). The AI Doc: Or How I Became an Apocaloptimist. Theatrical release, March 27, 2026.

  6. Silva, D. W. (2026). “Hollywood Just Packaged AI Anxiety.” Substack.

  7. Yudkowsky, E. (2023). “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down.” Time; Center for Humane Technology Substack. (2025). “Ask Us Anything 2025.”; Harris, T., interview on the 80,000 Hours Podcast.

  8. Tarnoff, B. and Weigel, M. (2018). “Why Silicon Valley Can’t Fix Itself.” The Guardian; LibrarianShipwreck. (2018). “Be Wary of Silicon Valley’s Guilty Conscience.”; Oxford Insights. (2020). “The Social Dilemma: A Failed Attempt to Land a Punch on Big Tech.”

  9. Evolvi, G. (2020). “The Social Dilemma: A Short Guide to Criticize It.”; Portside. (2020). “The Social Dilemma Fails to Tackle the Real Issues in Tech.”

  10. Farrell, M. (2020). “The Prodigal Tech Bro.” The Conversationalist.

