Working with AI: From Adoption to Adaptation
A senior executive at a large company tells her teams to cut costs by 30% using AI. No guidance on how. No discussion of what changes. Just a number, passed down because she heard it at a conference and feels the pressure to deliver results she cannot yet articulate. Her teams do what teams do under vague mandates: they scramble, adopt tools haphazardly, automate whatever seems automatable, and report back with metrics that look impressive on slides. Six months later, the 30% has not materialized. Cognitive strain is up. Morale is down. The executive requests a new strategy.
This pattern is becoming ubiquitous. It follows a logic that worked for previous technologies: understand the tool, make a plan, implement the plan, reach a new equilibrium. The assumption underneath is that the technology will hold still long enough for the plan to work. That assumption breaks with AI. The models change quarterly. The tools change weekly (or even daily; take a chill pill, Anthropic). The capabilities shift in ways that invalidate yesterday's best practices. By the time an organization has defined its AI policy, the policy describes a technology that no longer exists in that form.
John Boyd, the military strategist, argued in his 1976 essay Destruction and Creation that mental models inevitably fall out of alignment with reality.1 The mismatch is inherent to dynamic environments. Boyd’s prescription: deliberately destroy outdated models and build new ones from the pieces. Not once, but continuously. His better-known OODA Loop operationalizes the same insight: the critical phase is orientation, the continuous updating of your mental model. An organization that orients faster than its environment changes stays ahead. One that locks in a model and executes against it falls behind the moment the environment shifts. Chris Argyris made a complementary point about organizations: most operate in single-loop mode, adjusting tactics within existing assumptions.2 What volatile environments demand is double-loop learning, where the assumptions themselves are questioned. The AI moment is a double-loop problem treated with single-loop tools.
What replaces the old model is not a better plan. It is a different mode of operating, double-loop learning as daily practice, built on three principles: awareness of what is actually happening beneath the surface, intentionality in how you engage, and continuity of experimentation rather than the pursuit of a new steady state.
Awareness
Most responses to AI stay on the surface. New tools appear, new anxieties circulate. The Cognitive Costs of AI maps how quickly the discourse has escalated: from Cognitive Offloading (a neutral description) to Cognitive Surrender (outright capitulation) in under two years. AI and the Expansion of Work traces how the Jevons Paradox, context-switching, brain fry, and an expanding possibility space compound into exhaustion. These are real mechanisms. But they describe the symptoms, not the drivers.
The Causal Layered Analysis reveals what sits underneath. At the worldview level: the assumption that productivity equals output, the reflex to diagnose individuals rather than structures, the expectation that technology should make things easier. At the myth level: centuries of stories about machines replacing humans, a work ethic that equates busyness with moral worth, and an identity crisis that nobody wants to name. These layers shape how organizations respond to AI far more than any capability assessment or deployment roadmap. And they are almost never discussed.
Awareness means making the invisible visible, and not just once. In a field that shifts as fast as AI does, assumptions decay in weeks. A mental model of these tools that was accurate in December may be wrong by March. In practice, awareness looks like regular, deliberate check-ins: stepping back from the tools to examine what has changed and what needs updating. Two questions recur: What assumptions still hold? Where are we operating on autopilot?
The hardest part of awareness is acknowledging what the AI moment actually disrupts. It is not just workflows. The myth layer of the CLA shows that AI touches identity: Who are we, if machines can do what we thought defined us? That question lives in every team that has started using these tools seriously, and in most organizations it goes unspoken.
Intentionality
AI changes faster than it can be fully understood. But waiting for understanding is not an option. This creates a paradox that sits at the heart of working with these tools. The old model assumes a sequence: first understand, then act. With AI, the sequence reverses. Understanding emerges from engagement. Analyzing these tools from the outside produces remarkably little useful insight about what they mean for your work. They reveal their capabilities, their limits, and their effects on your thinking only through sustained use. This was true of digital transformation broadly, where organizations that tried to “understand digital” before engaging with it consistently fell behind. With AI, the gap between understanding and irrelevance has compressed from years to months.
The entry point is ambiguity tolerance. Accepting that you will act before you fully understand, that your first experiments will be based on incomplete mental models, that you will get things wrong. The prescription is simple: to begin, begin. But the paradox goes further. Ambiguity tolerance is not just the entry price. It is the condition under which understanding becomes possible at all. People who engage with these tools daily develop an intuition for what works, what fails, where the models are strong, where they hallucinate, how their own cognition responds. That intuition cannot be taught. It can only be grown through contact.
Intentionality is what prevents ambiguity tolerance from becoming chaos. Acting before you understand does not mean acting blindly. It means acting with consciousness: choosing what to try, observing what happens, adjusting deliberately. The opposite of stumbling forward, collecting hacks, half-adopting the next shiny tool, and accumulating a workflow built from accident rather than design.
Two concrete patterns illustrate this. The first is the strawman approach: asking an AI to produce a first draft, then deliberately taking it apart, reshaping it, pushing it toward your own thinking. The value lies in the friction between the machine’s output and your judgment about what it should be. That friction is where Meaningmaking happens. The second is voice input: choosing to dictate rather than type, because dictating keeps you in your own stream of thought rather than editing the machine’s. Small, intentional choices that protect the parts of thinking that matter.
Dave Snowden’s Cynefin framework names this: in complex environments, the appropriate response is probe, sense, respond.3 Call it a learning posture, if you need a name.
Continuity
The pursuit of a new equilibrium is the deepest trap. Every previous technology adoption ended with stabilization: the tool was integrated, processes were adjusted, people adapted, and a new normal emerged. AI does not offer that landing (yet). The models improve, the tools multiply, the use cases shift, and what worked last quarter may be counterproductive this quarter. Organizations that invest heavily in defining “how we use AI” and then treat that definition as settled are building on sand.
James Carse’s distinction between finite and infinite games is useful here.4 A finite game is played to win: there is an endpoint, a settled state, a conclusion. An infinite game is played to keep playing. The old adoption model is a finite game: reach equilibrium, declare victory, move on. Working with AI is an infinite one. The goal is not to solve it but to stay in the game, to keep adapting, to remain capable of surprise. “To be prepared against surprise is to be trained,” Carse writes. “To be prepared for surprise is to be educated.”
What works is permanent experimentation. That sounds exhausting, and it can be. But the alternative (pretending the ground is stable when it is not) is more exhausting, because it requires constant, invisible effort to maintain an illusion. Continuous adaptation, by contrast, can become a rhythm. Three questions structure it: What do we keep doing? What do we change? What do we stop? These are not annual strategy questions. They are monthly, sometimes weekly, check-ins that keep the organization’s engagement with AI aligned with what actually works.
The tension is real: experiment and deliver. Continuous experimentation does not mean permanent play. The measure remains whether valuable work gets done. Organizations (and individuals) that lose themselves in the novelty of AI tools, chasing every new capability, running experiments that never connect to outcomes, are drifting. The Exponential View team describes this honestly: eighteen months of experimentation, many mistakes, and then sharing what survived.5 What survived is what produced results. Everything else was learning that got composted into better judgment.
Curiosity is the engine, and most organizations are killing it. Without genuine interest in how these tools work, how they change, how they interact with your specific context, continuity becomes a mandate rather than a practice. And mandated experimentation is an oxymoron. Carse again: “Whoever must play, cannot play.” But curiosity requires conditions: time, space, permission to explore without immediate deliverables. When people are already drowning in their regular workload, when calendars are stacked and inboxes overflowing, curiosity is a luxury they cannot afford. The question for organizations is not “how do we make our people more curious about AI?” It is “what are we doing that prevents them from being curious?”
Meaningmaking as Compass
Across all three principles, the question recurs: how do you know you are on the right track? Awareness without direction is just anxiety. Intentionality without criteria is just effort. And continuity? Without a compass, it is motion that feels like progress. The compass, in the framework this series of notes has been building, is Meaningmaking.
Vaughn Tan’s concept (the capacity for subjective value judgments that AI cannot replicate, mapped in detail in the CLA note) draws a clean line through every workflow. When the results of AI-assisted work disappoint, the failure almost always traces to a point where a meaningmaking judgment was delegated to the machine. An email summary that missed what actually mattered. A research brief that covered everything except the relevant question. A strategy document that was comprehensive and empty.
Unbundling is the design principle that follows. Every workflow contains meaningmaking and non-meaningmaking components. Identifying which is which, deliberately, is the first step toward using AI well. The non-meaningmaking parts (gathering data, formatting, initial drafts, pattern matching) are where AI saves time without cost. The meaningmaking parts (deciding what question to ask, evaluating whether an output is good enough, choosing what to prioritize) are where human attention is essential. Most organizations skip the identification step. They deploy AI into a workflow and discover the meaningmaking boundaries only when something goes wrong: a client receives a brief that technically answers every question and somehow misses the point, or a team ships faster than ever while the quality of their decisions quietly degrades.
This connects back to the brain fry finding: when cognitive resources are depleted by constant oversight and context-switching, the first capacity to degrade is exactly meaningmaking. The subjective judgments that constitute quality become harder to make, not because the person lacks skill, but because the cognitive conditions for exercising that skill have been eroded. Protecting meaningmaking is a design problem.
The Hardest Part
The most common leadership response to AI is the worst one. Passing down pressure without direction. “Cut costs by 30% with AI” is a Litany-level response to a problem that lives at the Worldview and Myth levels. It treats AI as a cost-optimization tool, which is exactly the frame that produces brain fry, cognitive debt, and demoralization. Leaders who issue these mandates are not malicious. They are operating under the same pressure, the same outdated mental models, the same anxiety about falling behind. They pass the stress down because they do not know what else to do.
The most useful thing a leader can do right now is admit that. “I do not have this figured out. This is new for me too. Let us learn together.” That sentence, spoken honestly, changes more than any AI strategy document. It creates permission: to experiment, to fail, to say “this is not working,” to raise the identity-level concerns that the CLA surfaces. It moves the organization from single-loop to double-loop. And it models exactly the awareness and intentionality that this note argues for.
The concrete task is removing barriers. The overloaded calendars that leave no room for exploration, the metrics that reward only throughput, the culture that treats not-knowing as weakness. Igor Schwarzmann’s observation is relevant here: current AI tools are optimized for individual productivity.5 Good concepts for collaboration, for teams working with AI together, are still largely missing. That will change. But it will change faster for organizations that create space for collective experimentation now, rather than waiting for someone else to figure it out.
If you strip all of this down to its most practical barrier, what remains is time. The executive who mandates 30% savings has no time to reflect on what that mandate actually requires. Her teams have no time to experiment with AI in ways that might produce real improvement. The engineer experiencing brain fry has no time to step back and redesign his workflow. The leader who should be saying “let us figure this out together” has no time for the conversation. Every principle in this note (awareness, intentionality, continuity) requires time that most organizations have already allocated to something else.
The contrast agent is at work here too. Why do people publish AI-generated texts without reviewing them? Why do they ship images where the characters have six fingers? Why do they take shortcuts they would have rejected two years ago? The deepest answer involves worldview assumptions and identity. The most immediate answer is simpler: they have no time. AI was supposed to create that time. The Jevons Paradox explains why it does not: when tasks become cheaper, you do more of them, and the time that was freed gets immediately consumed.
Making time is itself the first act of intentionality. An organization that deliberately protects space for reflection and experimentation has already begun practicing what this note describes. The decision to say “these hours are not available for throughput” is a decision about what matters, which is awareness. Maintaining that decision through the next reorg and the next budget cycle is continuity. This is a precondition, not a solution. In practice, it means a leader standing in front of a team and saying “I don’t know what you’ll find, but I’m protecting your time to find it.” What that time produces depends on everything else this note describes. An organization that needs to know the ROI of reflection before allowing it has already answered the question of whether it will adapt.
Connections
The Cognitive Costs of AI maps the terminology the discourse has produced. This note asks what to do with that knowledge: not more terminology, but a different way of engaging.
AI as a Contrast Agent argues that AI reveals pre-existing problems. The recommendations here follow from that diagnosis: if the problems are structural, the responses must be structural too. Individual tips will not fix organizational conditions.
AI and the Expansion of Work traces the four mechanisms that produce cognitive strain. The continuity principle responds directly: in a world where the tools keep expanding what is possible, the capacity to choose what not to do becomes as important as the capacity to do more.
AI and Knowledge Work - A Layered Analysis provides the analytical foundation. The recommendations in this note are the reconstruction that the CLA invites: what would it look like to work differently at each layer?
Meaningmaking is the compass that runs through all three principles. It names the capacity that AI cannot replicate and that the current mode of adoption is degrading.
Open Questions
If continuous adaptation is the new mode, how do organizations distinguish between productive instability (we are learning and evolving) and unproductive instability (we are thrashing without direction)? There is a version of permanent experimentation that produces actual progress and a version that produces permanent disorientation. The line between them may be meaningmaking: are the experiments guided by subjective judgments about what matters, or are they reactions to the latest capability announcement?
And a question about collaboration: if good concepts for teams working with AI together are still missing, who will develop them? The organizations that experiment collectively will generate the patterns. But most AI discourse still addresses the individual user. What would it take for “how we work with AI as a team” to become as developed as “how I use AI in my workflow”?
1. John Boyd, "Destruction and Creation," unpublished essay, 1976. Boyd's core argument: mental models are finite representations of an infinite reality, and their inevitable mismatch requires deliberate destruction and reconstruction.
2. Chris Argyris, "Double Loop Learning in Organizations," Harvard Business Review, September 1977. Single-loop learning adjusts actions within existing assumptions. Double-loop learning questions the assumptions themselves.
3. Dave Snowden, "A Leader's Framework for Decision Making," Harvard Business Review, November 2007. The Cynefin framework distinguishes between simple, complicated, complex, and chaotic domains, each requiring different response strategies.
4. James P. Carse, Finite and Infinite Games: A Vision of Life as Play and Possibility, 1986.
5. Igor Schwarzmann, "Don't hate the player, hate the game," THE NEW, January 2025.