Collingridge Dilemma
This is a primer on the Collingridge Dilemma.
Table of Contents
- What is the Collingridge Dilemma?
- Origins and Context
- The Two Horns of the Dilemma
- Contemporary Examples
- Relationship to the Pacing Problem
- Proposed Solutions
- Critical Perspectives
What is the Collingridge Dilemma?
The Collingridge Dilemma describes a fundamental timing problem in governing emerging technologies: when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming.1
This creates a double-bind. Early in a technology’s development, regulation would be straightforward because no economic structures have solidified, no lobby groups have formed, and no careers depend on the status quo. But at this stage, we lack information about the technology’s eventual impacts. We don’t know which harms will materialize or which benefits will emerge. Regulation feels speculative, potentially innovation-hostile.
Later, once negative consequences become obvious, the technology is deeply embedded. It powers critical infrastructure. It employs thousands. It generates revenue. Entire industries depend on it. Change is now politically and economically nearly impossible to enforce.
The dilemma is not just theoretical. It structures nearly every major technology policy debate of our time.
The concept crystallizes a core tension in Foresight consulting: organizations know they should act early, but early action requires conviction about futures that haven’t yet arrived. Wait for certainty, and you’ve already lost.
Origins and Context
David Collingridge, a British academic working in Science and Technology Studies, formulated the dilemma in his 1980 book The Social Control of Technology.2 The book emerged from a specific historical context: the 1970s saw rapid technological change in nuclear power, biotechnology, and computing. These developments outpaced society’s ability to understand and respond to consequences.
Collingridge observed that attempts to control technology systematically failed, and the pattern has recurred ever since. Early interventions (such as the UK’s hasty 1990 ban on human cloning, enacted before the technology even existed) have looked foolish in retrospect. Late interventions (such as efforts to regulate automobile emissions after car culture was entrenched) have faced massive resistance and achieved limited effectiveness.
The dilemma wasn’t unique to any one technology. It was structural, recurring across domains. Collingridge identified two interrelated problems:
- The Information Problem: Early in development, we don’t know enough about a technology’s impacts to regulate intelligently. Scientific knowledge is incomplete. Social effects are uncertain. Economic consequences are unpredictable.
- The Power Problem: Later, when impacts are clear, the technology has created economic, political, and social constituencies that resist change. What economists call “lock-in” has occurred. Reversing course becomes prohibitively expensive.
Collingridge’s proposed solution was “Intelligent Trial and Error”: design technologies to be flexible and correctable, decentralize decision-making rather than relying on grand plans, and implement changes incrementally with built-in feedback loops.3
This solution was methodologically influential. It foreshadowed contemporary concepts like [[ Anticipatory Governance ]], adaptive regulation, and regulatory sandboxes. But Collingridge was realistic: the dilemma couldn’t be “solved,” only managed with greater sophistication.
The book predated the internet, social media, and AI, yet it reads as prophetic. Every major technology policy debate of the 21st century, from data privacy to platform content moderation to artificial intelligence, plays out within the parameters Collingridge identified.
The Two Horns of the Dilemma
The dilemma’s power lies in its symmetry. You can’t escape by moving in either direction.
The Information Problem (Early Intervention)
Regulate too early and you risk:
- Stifling beneficial innovation: Technologies often develop in unexpected directions. Early regulation may foreclose valuable uses nobody anticipated. Consider how restrictive data protection rules in the 1990s might have prevented beneficial health research or personalized services.
- Solving the wrong problem: Without real-world deployment, it’s hard to know which harms will actually materialize. Regulators may fixate on hypothetical risks while missing actual ones. Early gene therapy regulations, for instance, focused on hypothetical risks that never materialized while missing issues that did emerge in practice.
- Creating perverse outcomes: Ill-informed rules can make things worse. Restrictive licensing that entrenches incumbents. Safety standards that are technically infeasible. Compliance costs that benefit large firms at the expense of startups.
The information problem is epistemological. We genuinely don’t know. Not because we’re lazy or stupid, but because technologies co-evolve with social practices, business models, and cultural meanings that emerge only through deployment.
The Power Problem (Late Intervention)
Regulate too late and you face:
- Economic entrenchment: Entire industries depend on the technology. Jobs, pensions, regional economies. Shutting down or significantly constraining a mature technology imposes massive transition costs. Think of fossil fuels: everyone agrees we must transition away, but the economic lock-in makes change glacially slow.
- Political resistance: Powerful lobby groups form to defend the status quo. They fund research questioning harms, contribute to political campaigns, threaten to relocate. Platform companies resisting content moderation rules. Crypto exchanges fighting financial oversight. Automakers delaying emissions standards for decades.
- Technical irreversibility: Some technologies create dependencies that are hard to undo. Infrastructure gets built around them. Skills develop. Supply chains specialize. Data accumulates in proprietary formats. Migrating away becomes technically complex and expensive.
The power problem is political-economic. It’s not about lack of will; it’s about the asymmetry between concentrated benefits (to technology producers) and diffused costs (to society).
Contemporary Examples
The Collingridge Dilemma isn’t a historical curiosity. It structures our current technology governance crises.
Artificial Intelligence
In 2010, when machine learning began its modern resurgence, comprehensive AI regulation would have been straightforward. No major economic interests depended on the technology. Research was primarily academic. Deployment was limited.
But in 2010, we couldn’t foresee the specific harms. Algorithmic bias in hiring? Deepfakes undermining trust? Large language models generating misinformation at scale? These emerged only through deployment.
By 2024, when the EU AI Act was finally adopted, foundation models were already embedded across industries. OpenAI, Google, Microsoft, and Anthropic had attracted billions in investment. Thousands were employed in the sector. Critical business processes depended on AI systems. The Act had to accommodate existing deployments, carve out exceptions, and allow transition periods.
We’re in the classic dilemma: early regulation would have felt premature; late regulation faces entrenched interests and must retrofit rules onto existing infrastructure.
Social Media Platforms
Facebook launched in 2004. Twitter in 2006. In those early years, their eventual harmful effects (echo chambers, misinformation, mental health impacts, democratic interference) were impossible to anticipate.
Early regulation would have been possible. These were small companies, easily influenced by law. But what should regulators have done? Ban algorithmic feeds? Require content moderation? Limit data collection? None of these harms were obvious yet.
By 2016, after the Brexit referendum and the U.S. presidential election, platform harms were undeniable. But by then these were some of the world’s most valuable companies, employing tens of thousands, with sophisticated lobbying operations. The EU’s Digital Services Act and Digital Markets Act (both adopted in 2022) arrived nearly two decades after the platforms’ founding.
The lock-in is profound. Billions of users have social graphs stored on these platforms. Content creators depend on them for income. Media organizations rely on them for distribution. Unwinding this would be economically and socially disruptive even if politically feasible.
Cryptocurrency and Blockchain
Cryptocurrency emerged with Bitcoin in 2009. Early regulation would have been trivial: shut down exchanges, prohibit banks from transacting with crypto firms, classify tokens as securities.
But in 2009-2012, crypto seemed like a niche experiment. Regulators had no framework for assessing harm. Was this digital gold? A payment system? A speculative bubble? The technology’s implications were genuinely unclear.
By late 2022, after the FTX collapse and billions in losses, regulatory urgency was obvious. But by then crypto had become an asset class worth trillions of dollars at its peak. Powerful lobbies had formed (the Blockchain Association, Coinbase’s advocacy arm). Retail investors held stakes. Regulatory capture attempts intensified (the revolving door between the SEC and crypto firms).
The pattern repeats: early action felt unjustified; late action faces massive resistance.
Relationship to the Pacing Problem
The Collingridge Dilemma is intimately related to, and sometimes treated as synonymous with, the [[ Pacing Problem ]].
The Pacing Problem, popularized by Larry Downes in 2009, observes that “technology changes exponentially, but social, economic, and legal systems change incrementally.”4
This creates a growing gap. Moore’s Law drives computing power forward at exponential rates. But legislation moves at the pace of deliberation, committee hearings, and parliamentary cycles. The change is linear, incremental. The gap widens.
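A toy calculation makes the velocity mismatch concrete. The sketch below is illustrative only: the two-year doubling period and the one-unit-per-year pace of legal adaptation are assumptions chosen for demonstration, not empirical figures.

```python
# Illustrative sketch: growth rates are assumed, not measured.

def tech_capability(years: float, doubling_period: float = 2.0) -> float:
    """Exponential growth: capability doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def regulatory_capacity(years: float, units_per_year: float = 1.0) -> float:
    """Linear growth: one arbitrary unit of adaptation per year."""
    return 1.0 + units_per_year * years

for t in (0, 4, 8, 12, 16, 20):
    gap = tech_capability(t) - regulatory_capacity(t)
    print(f"year {t:2d}: tech={tech_capability(t):7.1f}  law={regulatory_capacity(t):5.1f}  gap={gap:7.1f}")
```

Under these assumptions, capability is roughly a thousand times its starting level after twenty years while the linear track has grown about twenty-fold: the gap doesn’t just persist, it compounds.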
Lyria Bennett Moses, an Australian legal scholar, examined the same phenomenon in her 2007 article “Recurring Dilemmas”. She identified four problem types:5
- Need for new laws for new technologies
- Uncertainty in applying existing rules to new practices
- Over- or under-inclusiveness of existing norms
- Apparent obsolescence of legal frameworks
Bennett Moses emphasized that “technology-neutral” drafting alone doesn’t solve the problem. Laws written too abstractly become meaningless; laws written too specifically become outdated.
The Pacing Problem and Collingridge Dilemma are two lenses on the same phenomenon:
- Pacing Problem: Diagnoses the velocity mismatch (exponential tech vs. incremental law)
- Collingridge Dilemma: Diagnoses the timing paradox (information vs. power)
Both conclude that reactive law-making is structurally inadequate. Both drive the case for Legal Foresight, [[ Anticipatory Governance ]], and adaptive regulation.
The Legal Foresight community frames the Collingridge Dilemma as the why behind proactive legal strategy. Organizations cannot wait for regulatory clarity; clarity arrives after entrenchment. They must anticipate regulatory responses while technologies are still malleable.
Proposed Solutions
Collingridge’s dilemma can’t be “solved,” but it can be managed. Several strategies have emerged:
1. Regulatory Sandboxes
Regulatory sandboxes create controlled environments where firms can test innovations under relaxed rules while regulators observe and learn.6
The UK’s Financial Conduct Authority pioneered this in 2015 for fintech. Companies get temporary authorization to operate with modified regulations. Regulators collect data. If innovations prove safe, rules adapt. If harmful, the sandbox prevents widespread damage.
Sandboxes embody Collingridge’s “Intelligent Trial and Error”: small-scale experimentation, fast feedback loops, iterative learning.
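As a rough illustration of that loop, the toy sketch below treats a sandbox as repeated small-scale trials whose observed outcomes feed back into the next round’s scale. The harm model, threshold, and scaling factors are invented for the example; they don’t describe any real sandbox programme.

```python
import random

random.seed(42)  # reproducible illustration

def observed_harm(scale: float) -> float:
    """Hypothetical harm signal: grows with deployment scale, plus noise."""
    return 0.02 * scale + random.gauss(0, 0.5)

deployment_scale = 10.0   # start small: a limited cohort inside the sandbox
harm_threshold = 1.0      # assumed tolerance agreed with the regulator

for round_no in range(1, 7):
    harm = observed_harm(deployment_scale)
    if harm > harm_threshold:
        deployment_scale *= 0.5   # tighten: shrink the trial before lock-in sets in
        action = "tighten"
    else:
        deployment_scale *= 1.5   # expand cautiously while change is still cheap
        action = "expand"
    print(f"round {round_no}: harm={harm:5.2f}  action={action}  next scale={deployment_scale:6.1f}")
```

The structural point is what matters: decisions stay small and reversible while information accumulates, which is exactly the corrective capacity that the power problem erodes once a technology scales.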
Limitations: Sandboxes favor firms with resources to participate. They risk regulatory capture (cozy relationships between sandbox participants and regulators). They can legitimize technologies that should be rejected outright.
2. Anticipatory Governance
[[ Anticipatory Governance ]], developed by David Guston and colleagues around 2008, combines foresight methods, stakeholder engagement, and adaptive institutions.7
The approach:
- Use [[ Scenario Analysis ]], horizon scanning, and expert consultation to surface potential futures
- Engage diverse stakeholders (technologists, affected communities, ethicists, policymakers)
- Build feedback mechanisms into regulation from the start
- Accept uncertainty and design for revision
The EU’s approach to AI governance reflects anticipatory governance principles: multi-year consultation, the High-Level Expert Group on AI, ethics guidelines preceding binding law.
Limitations: Time-consuming. Requires political will to engage in abstract futures planning. Risks becoming a fig leaf for inaction (“we’re studying it”).
3. Adaptive and Experimental Regulation
Adaptive regulation builds revision into law:
- Sunset clauses: Rules expire unless renewed, forcing periodic review
- Regulatory review cycles: Mandated reassessment every X years
- Performance-based standards: Define outcomes rather than specific methods
- Safe harbor provisions: Protect firms following best practices, even if rules change
Singapore’s approach to autonomous vehicles exemplifies this: special zones with adapted traffic rules, mandatory reporting, rules that evolve based on incident data.
Limitations: Creates regulatory uncertainty. Firms struggle to make long-term investments when rules may change. Legal stability has value; too much fluidity creates chaos.
4. Flexible Institutional Design
Collingridge emphasized decentralized decision-making and modular technology design. If technologies are built with correctable architecture (open standards, interoperability, reversible commitments), lock-in is reduced.
Some scholars extend this to institutional design: regulatory agencies with in-house foresight units, parliamentary futures committees (like Finland’s Committee for the Future), cross-sector coordination bodies.
Limitations: Institutional reform is slow. Precisely the kind of change the Pacing Problem says we’re bad at.
5. Principle-Based and Technology-Neutral Regulation
Rather than regulate specific technologies, regulate principles that apply across domains:
- Data protection regimes (GDPR) apply regardless of whether data is processed by AI, traditional databases, or future technologies
- Competition law addresses market power generally, not just specific platforms
- Safety and liability frameworks focus on harms, not implementation details
This approach ages better than technology-specific rules.
Limitations: Principles can be vague, making compliance uncertain. They require judicial interpretation, which takes years. They may not address technology-specific risks adequately.
Critical Perspectives
The Collingridge Dilemma is influential, but not uncontroversial.
Critique 1: Rhetorical Weapon for Inaction
The dilemma can be weaponized. Tech industry lobbyists invoke it to argue against regulation: “Too early to know the harms!” When harms emerge, they switch arguments: “Too late to regulate without destroying innovation!”
This is rhetorical abuse, not a critique of Collingridge himself. But it’s a real phenomenon. Policymakers must be alert to bad-faith invocations of uncertainty.
Critique 2: Many Harms Are Foreseeable
Critics argue the dilemma overstates our ignorance. Many technology harms recur across contexts:8
- Privacy violations (data collection always risks abuse)
- Market concentration (network effects drive winner-take-all dynamics)
- Discrimination (algorithms trained on biased data perpetuate bias)
- Labor displacement (automation displaces workers, always)
If harms are foreseeable, robust horizontal regulation can address them without perfect foresight about specific technologies. We don’t need to know exactly how AI will develop to know data protection, anti-discrimination law, and labor protections will be necessary.
This critique suggests the dilemma is partly a failure of institutional memory, repeatedly treating recurring problems as novel.
Critique 3: Overemphasis on Technocratic Solutions
Some scholars argue the dilemma frames technology governance as a technical problem (information, timing) rather than a political one (power, interests).
The real barrier isn’t uncertainty; it’s that powerful actors profit from harmful technologies and resist regulation. Focusing on sandboxes and adaptive regulation may distract from the need for political mobilization, antitrust enforcement, and challenging corporate power directly.
From this perspective, the dilemma naturalizes what should be contested: the assumption that technology development proceeds on its own trajectory, which law must accommodate. An alternative view: law should shape technology development democratically from the start, not react to privately determined trajectories.
Critique 4: Cultural and Geographic Bias
The Collingridge Dilemma emerges from Western liberal democratic contexts. Other governance traditions may not experience the same double-bind.
China’s approach to AI governance, for instance, involves early state direction, close coordination between tech firms and government, and rapid iteration based on deployment data. This combines elements of both early and late intervention simultaneously.
African scholars working on AI governance emphasize building on existing local regulatory structures rather than importing Western “future-proof” frameworks wholesale.9 They argue external consultants and global tech firms shouldn’t dominate the agenda.
The dilemma may be partly a product of specific institutional arrangements (separation of powers, judicial review, lobbying culture) that create timing rigidities.
1. Collingridge, David (1980). The Social Control of Technology. St. Martin’s Press. (Internet Archive) The exact formulation: “Attempts to control a technology tend to have one of two outcomes. Either they come too late, when the technology is already so well established that control is difficult, or they come too early, when the technology is still poorly understood and consequences are hard to predict.”
2. Collingridge’s work built on earlier STS scholarship examining technology’s social shaping, but he formalized the timing paradox in a way that proved remarkably durable. The book is foundational for Science and Technology Studies and remains cited in contemporary AI governance debates.
3. Collingridge, David (1980). The Social Control of Technology, Chapter 7: “Intelligent Trial and Error.” This approach emphasized: (1) make decisions close to where consequences are felt, (2) keep options open through modular design, (3) build in feedback mechanisms, (4) avoid premature commitment to fixed trajectories.
4. Downes, Larry (2009). The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age. (Penguin Random House) The Pacing Problem became a rallying cry in tech policy circles, often used to argue for regulatory restraint, though Downes himself emphasized the need for smarter, more adaptive regulation.
5. Bennett Moses, Lyria (2007). “Recurring Dilemmas: The Law’s Race to Keep Up with Technological Change.” University of Illinois Journal of Law, Technology & Policy, 2007(2). (SSRN) Bennett Moses showed that the dilemma isn’t new; it recurs across eras and technologies. Technology-neutral drafting is necessary but insufficient.
6. The regulatory sandbox concept originated in the UK Financial Conduct Authority’s 2015 innovation program. (FCA Regulatory Sandbox) By 2024, over 50 jurisdictions globally had implemented some form of regulatory sandbox, particularly in fintech, healthtech, and mobility. The EU AI Act (2024) requires member states to establish AI regulatory sandboxes, operational by August 2026.
7. Guston, David H. (2014). “Understanding ‘Anticipatory Governance’.” Social Studies of Science, 44(2), 218–242. (DOI) Guston developed the concept in the context of nanotechnology governance, arguing that foresight, public engagement, and institutional reflexivity should be built into research and innovation from the start.
8. Guihot, Michael, Matthew, Anne F., and Suzor, Nicolas P. (2017). “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence.” Vanderbilt Journal of Entertainment & Technology Law, 20(2). (PDF) The authors argue against “AI exceptionalism,” the assumption that AI is so unique that existing regulatory frameworks don’t apply. They emphasize learning from institutional experience with prior technologies.
9. Gwagwa, Arthur, Kazim, Emre, Hilliard, Abeba, Siminyu, Kathleen, and Smith, Matthew (2022). “Responsible Artificial Intelligence in Africa: Challenges and Opportunities.” In Patterns of Commoning. (Research4Life) The authors emphasize that African contexts have existing governance structures and legal traditions that should shape AI regulation, rather than uncritically adopting Western frameworks.