The Forward Deployed Foresight Strategist

Forward Deployed Engineers are having a moment. The role, pioneered by Palantir, has seen job postings increase by 800-1,000% in 2025. Salesforce is building a thousand-person FDE team. Stripe, OpenAI, and Anthropic are all hiring for variations of the same idea: engineers who don’t build from headquarters but embed directly with clients, living inside their problems until they understand them better than the people who created them.1

Ethan Mollick recently questioned whether FDEs will deliver on what companies are hoping for. His argument: AI adoption “is ultimately far less of a technical issue and much more about rethinking the deep expertise & structure of your organization around AI.” His closing observation: consultants and FDEs “really have no established playbooks to give firms, no years of data to draw on, no clear views of the future.”

That last part caught my attention. No clear views of the future. That’s the gap.

Nobody, as far as I can tell, has asked what happens when you apply the FDE principle to foresight — though I’ve explored the underlying concept of the practitioner role itself in The Forward Deployed Futurist, and the practical tooling side in Foresight with Claude Code.

The Foresight Delivery Problem

I’ve written about this before (in The Beginning and the End of Foresight and Wie Foresight- und Innovations-Teams wirksam werden): foresight work has an impact problem. The scenarios get built, the report gets delivered, everyone’s happy. And then nothing happens. Insights gather dust while the organisation moves on to next quarter’s priorities.

The common diagnosis blames methodology: if only we had sharper signals, more rigorous analysis. I think the diagnosis is wrong. The methods are fine. The delivery model is broken.

Most foresight firms take a brief, apply their standard toolkit, and throw the results back over the fence. The engagement ends where the actual work should begin: at the point where futures thinking needs to become organisational practice. This is a structural problem, not a skills problem. And it turns out someone in a completely different industry figured out an answer.

What Palantir Figured Out

Palantir’s insight2 (and I’m borrowing liberally from Zoe Scaman’s excellent The Palantir Model here) was that the real product is embedded cognitive capacity. Their Forward Deployed Engineers move into client organisations for months, sometimes years. They don’t trust briefs. They don’t trust stated requirements. They assume the organisation is wrong about what’s actually broken.

And they’re usually right.

Scaman puts it well: the difference between “tell me your problem” and “let me watch you work for six months” is the difference between stated and revealed preferences. Any behavioural economist will tell you those are very different things.

There’s a second piece that matters: Palantir splits its engineers into two groups. FDEs build bespoke, fast, whatever-works solutions on site. Product Development engineers then extract the patterns and build reusable infrastructure. Every engagement makes the whole system smarter. This is how knowledge compounds instead of staying locked in someone’s head.

Familiar Territory?

Two existing roles cover adjacent ground. “Applied Futurist” is a positioning label: Noah Raford and Tom Cheesewright use it to signal practice over academia. It says something about attitude but nothing about how the work is delivered.

“Futurist in Residence” is closer. IDEO, Nokia, Stanford’s d.school have all used versions of it. But residencies typically start from a stated problem (“Help us think about the future of X”) rather than the FDE assumption that the organisation is wrong about what’s broken. And they’re standalone: when the foresight strategist leaves, whatever they built stays with that one client. No pattern extraction, no flywheel.

Whether these differences justify a new term is an open question. I’m less interested in the label than in what happens when you take the architecture seriously.

Forward Deployed Foresight

What would foresight look like if it followed this logic?

Start with the context problem. External foresight practitioners rarely understand an organisation deeply enough to produce relevant futures. They capture what the brief says, not what the organisation actually needs. The real strategic questions (the ones that would make a foresight project useful) hide in the gap between what people tell you in a stakeholder interview and how they actually make decisions. You can’t access that gap in a two-day workshop.

Then there’s the translation problem. Even good foresight work stays abstract without someone who bridges both worlds: the futures thinking and the organisational reality. That bridge requires presence. It requires understanding who holds power, whose budget is threatened, which team will block implementation and why. Almost none of this shows up in a deliverable.

The most effective foresight engagements I’ve been part of hinted at this. Scenario work, when you read between the lines, is good at surfacing the real problems in an organisation. People project their present pain into the future: their scenarios either solve problems nobody officially acknowledges or amplify them out of all proportion. You start seeing where stated strategy and actual anxiety diverge. But in a standard engagement, that’s where it ends. You deliver the report, add some recommendations, and leave. You have no mandate to work on what the scenarios actually revealed. You waste the best diagnostic tool foresight has, because the delivery model cuts diagnosis off from implementation.

The Encoding Layer

Here’s where it gets interesting for solo practitioners. Palantir can afford to embed engineers because they have a second layer extracting and systematising the patterns. A solo foresight strategist doesn’t have a Product Development team. Every engagement would be linear, not exponential. Scaman calls this “artisanal consulting”: brilliant bespoke work that doesn’t compound.

Unless you have AI.

AI tools change the calculus for solo practitioners. Think about what an encoding layer could look like in practice: six months embedded with a client, and every meeting note, stakeholder map, and strategic document feeds into a knowledge system. The foresight strategist leaves, but the system retains the patterns: which arguments moved the board, where resistance clustered, what the actual decision-making logic looked like beneath the stated strategy. At the next client, the system surfaces parallels. The manufacturing company’s innovation theatre looks structurally familiar because the insurance company did something similar two years ago. The foresight strategist doesn’t start from zero.
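To make the mechanism concrete, here is a toy sketch of such an engagement memory in Python. Everything in it is hypothetical: the client names, the notes, and the `EngagementMemory` class are illustrations, and the bag-of-words cosine similarity is a pure-stdlib stand-in for whatever embedding model a real system would use. The point is only the shape of the flywheel: ingest notes during one engagement, query them from the next.

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Bag-of-words term frequencies; a crude stand-in for a real embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class EngagementMemory:
    """Retains notes from past engagements and surfaces structural parallels."""

    def __init__(self):
        self.notes = []  # list of (client, note_text, vector)

    def ingest(self, client: str, note: str) -> None:
        """Called throughout an engagement: every note feeds the system."""
        self.notes.append((client, note, vectorise(note)))

    def parallels(self, observation: str, top_k: int = 3):
        """At the next client, rank past notes by similarity to a fresh observation."""
        qv = vectorise(observation)
        scored = [(cosine(qv, v), client, note) for client, note, v in self.notes]
        return sorted(scored, reverse=True)[:top_k]

memory = EngagementMemory()
memory.ingest("insurance co", "innovation theatre: board funds pilots that never scale")
memory.ingest("retail co", "pricing team blocks any scenario touching the loyalty programme")

# A fresh observation at a new client surfaces the structurally similar past case.
hits = memory.parallels("manufacturing client runs innovation pilots that never scale")
```

In this sketch, the manufacturing observation ranks the insurance note first, which is exactly the "structurally familiar" moment the paragraph above describes. A production version would swap the word-count vectors for proper embeddings and add the hard part: keeping client-specific detail confidential while letting the patterns travel.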

I’ve started calling this “Documentation as Infrastructure”. Each engagement feeds the system, and the system feeds the next engagement. That’s the beginning of a flywheel.

That’s the theory, at least. The encoding layer is the most fragile part of this model.

Open Questions

This idea raises more questions than it answers. A few that I’m sitting with:

Does it scale? Even with AI as an encoding mechanism, one person can only be embedded in one organisation at a time. The Palantir model works because they deploy teams across hundreds of clients simultaneously. A solo Forward Deployed Foresight Strategist is inherently limited. The question is whether the AI-powered knowledge infrastructure compensates enough to make the model economically viable.

What distinguishes this from an Interim Head of Strategy? The answer, I think, lies in the mandate. An interim fills a role. A Forward Deployed Foresight Strategist brings an external perspective and a specific lens (futures thinking) while remaining structurally outside the org chart. The value comes precisely from not being absorbed into the organisation’s own logic.

Which organisations are ready for this? Scaman’s observation about Palantir applies here: this model requires a certain type of client. One in enough pain to tolerate the intrusion, to let someone see the mess. Most organisations haven’t reached that point with foresight. They’re still in “hire a consultancy and shelve the report” mode.

What about the encoding problem? AI helps, but it doesn’t fully solve it. Pattern recognition across engagements, building a genuine “private pattern library” of organisational futures challenges: that’s a hard problem. The tools exist. The practices are emerging. But turning embedded experience into systematic, reusable knowledge at a consistent quality is still more aspiration than reality.

The dependency trap. Here’s the uncomfortable part. Palantir’s FDE model is economically viable precisely because it creates what the industry calls “near-unchurnable accounts.” The deeper the embedding, the harder it is for the client to leave. That’s not a bug in Palantir’s model. It’s the business model. A Forward Deployed Foresight Strategist who aims for capability transfer is working against the mechanism that makes the original architecture profitable. My work has always been oriented towards the opposite: helping organisations build their own capacity, not making them dependent on mine. That tension doesn’t resolve easily. The question is whether there’s an economic model that rewards depth of embedding without requiring permanent dependency.

The Building Blocks

If I were sketching the building blocks, they would look something like this:

Time: Months, not weeks. Enough to move past stated problems to revealed ones.

Access: Not just the strategy department. Across functions, up and down the hierarchy, into the meetings where actual decisions happen.

Mandate: Diagnosis paired with implementation support. The Forward Deployed Foresight Strategist stays through the point where insight becomes action.

Infrastructure: AI-powered documentation and knowledge systems that capture patterns, build institutional memory, and create the encoding layer that makes each engagement compound.

Exit design: The goal is capability transfer. The organisation should be able to continue the work after the foresight strategist leaves. This is where the model deliberately breaks with Palantir.

What I keep coming back to is the relationship between the futurist and the organisation. Change that, and the methods we already have might finally land.

Update: Scaman’s Practice

Since writing this, Zoe Scaman described on LinkedIn what happened when three major organisations independently asked her to embed with them. A 160-year-old bank spanning half the world’s time zones. A company that has shaped play and collective imagination across generations. A business whose products three billion people use before their first thought of the day.

None of them had a brief. None of them started from a stated problem. Her words: “coming in with no map, no clear direction, just the capacity to go to the strange places, hold the contradictions, and crack open what might be possible.” That’s the FDE principle applied to foresight, whether or not anyone uses that label.

What she describes isn’t trends work. It’s “threading together emerging technologies with geopolitics, identity formation, cognitive development, the disintegration of societal contracts, the stories cultures use to make sense of themselves.” The outputs vary by client: speculative fiction, deep scenario planning, whitepapers, internal keynotes, firesides. All aimed at starting conversations that normal business doesn’t make space for.

Two things stand out. First, how she got there. Nobody pitched these engagements. The writing did it: “the long, strange, explorative pieces I put out on futures and uncertainty.” Thought leadership as a diagnostic signal. Leaders inside these organisations had been wrestling with the same questions and recognised a mind that could help them think, not someone selling a methodology.

Second, what’s absent. Scaman is running three parallel engagements with some of the world’s largest organisations. Each one produces deep contextual knowledge about how that specific organisation thinks, decides, and avoids. But there’s no mention of patterns flowing between them. No encoding layer. Each engagement appears to exist on its own. That’s the artisanal consulting problem from above: brilliant bespoke work that doesn’t compound.

Which brings me back to the question this whole note is circling. The practice Scaman describes is real. The demand is real. She has the foresight, the access, the trust. What’s missing from the picture is the infrastructure that would make three separate deep engagements compound.

  1. Data from Rocketlane and Pave, both tracking the FDE trend through 2025. 

  2. Palantir’s track record in surveillance, immigration enforcement, and military applications is well-documented and worth your discomfort. I’m borrowing the architecture, not endorsing the architect. Scaman puts it well: “you can admire the blueprint while despising the builder.” Fuck Karp! And apparently, you can now basically vibecode Palantir.
