It’s about two weeks after the Context Is the New Code presentation at AIE London. I called it “an unpolished thought” on stage, because that’s what it was. I’d drawn an infinity loop on a slide, and apparently numbered the steps 1-4-3-2 instead of 1-2-3-4, which the internet has been kind enough to point out roughly 200 times. Fair. The diagram was, in fact, vibe-coded.
What I did not expect is what happened next.
The video crossed 60k views in its first 10 days. People translated it into Korean, Japanese, Chinese, and Arabic, not because anyone asked them to, but because the framing seems to resonate independently in each ecosystem. The acronym CDLC, Context Development Lifecycle, started showing up in places I had nothing to do with. Multiple authors independently rendered the framework in 4-, 5-, and 7-stage variants. Patrick-from-2009-DevOps recognizes the pattern: when other people start adding their own stages, the idea has left the building. That’s how you know it’s working.
The talk, in one paragraph
If you haven’t watched the video, the thesis is plain. As coding agents get more capable, the bottleneck shifts. It’s no longer how fast we type. It’s how well we can describe to an agent what we want, why we want it, and how it should behave in our actual messy systems. Code has version control, review, testing, CI/CD, observability. Context (the prompts, skills, instructions, knowledge we feed agents) has none of that, yet. We’re in the cowboy era of context. The Context Development Lifecycle is just the obvious move: generate, evaluate, distribute, observe, repeat. I drew it as an infinity loop because the loop never ends. And apparently to give the audience a numbering puzzle. You’re welcome.
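If you want the loop even more concrete, here it is as a toy sketch in Python. Every function below is scaffolding I invented for illustration; none of it is a real library, and it is not the tooling behind the talk.

```python
import hashlib

def generate(signals):
    """Turn observed signals into a candidate context file (toy version)."""
    rules = "\n".join(f"- Prevent recurrence of: {s}" for s in signals)
    return f"# AGENTS.md (candidate)\n{rules}\n"

def evaluate(candidate):
    """Stand-in eval: real evals run agent tasks against the candidate
    and score the outcomes; here we only sanity-check the artifact."""
    return candidate.strip() != ""

def distribute(candidate):
    """Stand-in distribution: version the artifact by content hash, the
    way you would publish any other build artifact."""
    version = hashlib.sha256(candidate.encode()).hexdigest()[:8]
    print(f"published context @ {version}")

def observe():
    """Stand-in telemetry: collect failure signals from agent sessions."""
    return ["agent ignored the error-handling convention in payments/"]

# Generate -> Evaluate -> Distribute -> Observe, and round again.
signals = observe()
for _ in range(2):  # two passes, purely for demonstration
    candidate = generate(signals)
    if evaluate(candidate):
        distribute(candidate)
    signals = observe()  # feeds the next generate; the loop never ends
```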
The companion piece on the Tessl blog is The Context Flywheel: why the best AI coding teams will win on context. That’s where the longer-form competitive argument lives: models are commoditising, tools are converging, and the two years of continuously refined context you’ve accumulated is the part nobody else has.
The longer write-ups on the Tessl blog
The talk only had 25 minutes. The posts on the Tessl blog are where the practitioner detail lives, for the engineers who nodded along during the keynote and then quietly asked “ok, but how, on Monday morning?”:
- The Context Flywheel: why the best AI coding teams will win on context. The longer-form competitive case. The line I keep coming back to: “Better context produces better agent output. Better agent output generates better signals. Better signals produce better context.” And the unglamorous prerequisite buried in the middle: if nobody owns it, it rots.
- Context Development Lifecycle: better context for AI coding agents. The canonical write-up of the lifecycle itself. This is where “every session is a new hire” gets spelled out properly. The angle I’d flag for anyone in a senior eng role: in this world the incentive is finally aligned with the work. Developers actually want to write better context, because it directly improves the agents they personally rely on.
- CI/CD for context in agentic coding: same pipeline, different rules. If you try to run evals like unit tests, you’re going to have a bad time. Three takeaways from re-reading it: (1) error budgets, not pass/fail gates, because LLM non-determinism breaks binary outcomes (there’s a sketch of this one right after this list); (2) adding instructions doesn’t just add behavior, it changes behavior, and the ripple effects are the rule; (3) context staleness is silent, so you need scheduled independent eval runs to catch the drift.
- Context maturity for AI coding teams. Three dimensions that have to mature together (agents and tools, context itself, people and organization), and five stages running from manual through repeatable and automated up to self-improving. The failure pattern I see most often is teams that distribute context before testing it. “We shared our rules across teams and things got worse” is a real signal, and it almost always means distribution happened before validation. Start with an audit.
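As promised, a hedged sketch of point (1): gate context changes on an error budget rather than a binary pass/fail. `run_agent_task` is a hypothetical stand-in for however you execute one eval case against a candidate context version, and the 5% budget is illustrative, not a recommendation.

```python
import random

def run_agent_task(task, context_version):
    """Hypothetical: run one eval case against the candidate context and
    return True on success. Simulated with randomness here to mimic the
    non-determinism of real agent runs."""
    return random.random() > 0.1

def within_error_budget(tasks, context_version, runs_per_task=5, budget=0.05):
    """Gate on an overall failure rate instead of requiring every single
    run to pass, because any individual run can flake without the
    context itself being wrong."""
    total, failures = 0, 0
    for task in tasks:
        for _ in range(runs_per_task):
            total += 1
            if not run_agent_task(task, context_version):
                failures += 1
    rate = failures / total
    print(f"failure rate {rate:.1%} against a budget of {budget:.0%}")
    return rate <= budget

tasks = ["add input validation", "refactor the invoice module"]
if within_error_budget(tasks, context_version="abc1234"):
    print("within budget: the context change can move to distribution")
else:
    print("over budget: hold distribution and inspect the regressions")
```

The shape is the point: a single failed run tells you almost nothing, while a failure rate over budget tells you to stop.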
If you came for the slogan and stayed for the practice, those four posts are the practice.
What other people have been writing
This is the part that genuinely overwhelmed me. The community didn’t just nod, they extended, translated, criticised, and packaged the idea. A non-exhaustive walk through the writing that I keep going back to:
The faithful long-form renderings
- Kushal Banda on Towards AI: “Context Is The New Code” is the closest thing to a written companion to the talk. He lifts the “every session is a new hire” framing and lands the line that I should have said more clearly on stage: “Every eval failure is a specification you didn’t write.” He also makes the point that infinite context windows do not save you; they produce more conflicts, not fewer. Curation becomes governance.
- Conffab frames it as the next DevOps moment, with a clean restatement: “If context is the new bottleneck, then we need a development lifecycle built around it.”
- Boden Fuller compresses the argument into a phrase that does a lot of work: “context needs its own engineer.”
People who rewrote the cycle
This is the part that surprised me most. I drew four stages. Within two weeks, the loop had been redrawn at least three different ways, and the variants are not all wrong.
| Source | Stages | Notes |
|---|---|---|
| My original | Generate → Evaluate → Distribute → Observe | 4 stages, infinity loop, numbered 1-4-3-2 on the slide (sorry) |
| 12factoragentops and Boden Fuller | Generate → Compile → Test → Distribute → Deliver → Observe → Adapt | 7 stages; more honest about the build/release split that I was hand-waving past |
| Note.com (JP) and Taeho.io (KR) | Generate → Test → Distribute → Observe → Adapt | 5 stages; Adapt pulled out of Observe as its own beat. The right call |
| Vinay Krishna | Generated → Tested → Versioned → Distributed → Observed | Reframes the loop as five attributes of context rather than stages; explicit “Versioned” is sharper than what I said on stage |
| Artem Zverev | Generate → Evaluate → Distribute → Observe | Same 4 stages, each instrumented with concrete artifacts (agent.md, CLAUDE.md, MCP, linters, registries, prompt-injection filters) |
| Several others | Observe → Generate → Evaluate → Distribute | Same loop, different starting point. Makes sense if you’re consuming context rather than authoring it |
Patrick-from-2009-DevOps already learned this lesson once: the moment someone says “well, in our org we actually do it as ops, dev, ops, dev, ops…”, the framework has stopped being yours. That’s a feature.
The other extensions worth your time
- Jarosław Wasowski on Medium introduces “context debt”, a Cunningham-style sibling of technical debt. He opens with a Friday-evening failure scenario where a 35-minute agent run corrupts an invoicing module because file #4 in a grep list landed in the “attention dead zone” of a 60-percent-full context window. It’s a clinical bit of writing and the term is going to stick.
- baz.co’s cyber.md proposal applies the CDLC frame to security posture: a Markdown-based security guidance file that agents consume during development. Citing it because nothing makes you feel more validated than seeing your framework picked up in domains you weren’t writing for.
- themoltnet’s README on GitHub lists the CDLC as a related project alongside other agent-memory systems. The ecosystem is starting to network.
The community on LinkedIn
- Samuel Flender (Apple) credits the Context Flywheel as one of the most important growth drivers for eng teams in the AI coding era, and raises a question I don’t yet have a good answer for: how do you maintain and version-control a shared corpus of agent skills across an org?
- Artem Zverev takes the framework all the way to concrete artifacts: agent.md, CLAUDE.md, MCP connectors, context linters, LLM-as-a-judge, skill registries, prompt-injection filters. He coins context-as-a-WAF, which is sharper than anything I said on stage.
- Pawel Wiacek (Alokai) pairs the talk with Boris Cherny’s Sequoia AI Ascent appearance and lands one of my favorite formulations: “Cherny is the printing-press picture. Debois is the Monday morning.” I am going to use that, with credit.
- Vinay Krishna renders the thesis bluntly: “Code was the source of truth. Now context is the source of truth, code is just the output.” He calls it CI/CD for context, which is the right mental model.
- Dennis Traub names the three context layers (technical, project, business) and raises the freshness problem: “a best practice encoded a few months ago may be wrong today.” If you’ve ever wondered why your agent quietly started doing the wrong thing six weeks after you wrote a “best practice” doc, that’s the diagnosis. There’s a small sketch of a freshness check right after this list.
- Mubashir Ali Baig wrote a full Pulse article on CDLC as the new AgentOps paradigm without using AI to write it. He pitches CDLC as a shared, organization-wide context discipline. The framing of “what differs an experienced developer from a new joiner is essentially a matured context of three main things” is exactly the angle I was groping toward.
- Brandon Wooding (Imperial College), Som Rout (Lloyds/Wipro), and Tech Talks Weekly all did substantive recaps from the conference floor. Tech Talks Weekly ranked the video the #1 software engineering talk of the week, which is exactly the kind of detail I had no business expecting.
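Since the freshness problem is the one that automates most directly, here is the small sketch promised above: a scheduled job that flags context files nobody has re-validated within a freshness window. The context/ directory, the Markdown glob, and the 90-day window are all assumptions for illustration.

```python
import time
from pathlib import Path

FRESHNESS_WINDOW_DAYS = 90  # assumption: tune to how fast your stack moves

def stale_context_files(root: Path):
    """Yield context files whose last modification falls outside the
    freshness window. Staleness is silent, so this belongs in a
    scheduled job, not in the PR pipeline."""
    cutoff = time.time() - FRESHNESS_WINDOW_DAYS * 86400
    for path in root.rglob("*.md"):
        # mtime is a crude proxy; a better signal would be the timestamp
        # of the last passing eval run against this file.
        if path.stat().st_mtime < cutoff:
            yield path

# Hypothetical layout: all agent-facing context lives under context/.
for path in stale_context_files(Path("context")):
    print(f"{path}: untouched for {FRESHNESS_WINDOW_DAYS}+ days, re-run the evals")
```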
The international translators
- Wikidocs and Taeho.io both carried it into Korean.
- Note.com (Japanese) renders all five stages with the engine/fuel metaphor intact and surfaces the 99.9% line about public skill quality, which apparently translates well.
- xiaoyuzhoufm and Podwise EP47 turned it into Chinese-language podcast recaps.
- Nawaf Alwadaani wrote it up in Arabic.
- DavidKo Learning Journey covered it in Traditional Chinese with a headline that translates to roughly “AI code always goes wrong? DevOps Father: the bottleneck isn’t AI, it’s your rule management.” I cannot improve on that.
The full coverage trail runs to about 30 pieces and counting, listed at the bottom of this post. The point isn’t the count, the point is that the diversity of framings is doing the work that no single talk could.
What the community taught me back
A few honest observations after watching the responses come in.
Ordering varies, and that’s fine. I drew Generate, Evaluate, Distribute, Observe. Several smart writers came back with 7-stage variants. Others lead with Observe. Turns out people walk this loop differently depending on whether they’re authoring a library, consuming one, or running production. Maybe the AI that numbered my diagram 1-4-3-2 knew it all along.
“Context debt” is the keeper. I wish I’d thought of it first. I’ll be citing Wasowski.
The biggest “but what about” is the 1AM pager. Multiple people pushed back with: “fine, but when production breaks at 1AM, am I really updating context and waiting for evals?” The answer is no, you fix the code. Then you improve the context so the agent doesn’t make that same shape of mistake again. CDLC is not a replacement for SDLC. They run in parallel. I should have said that on stage more loudly than I did.
The skeptics are mostly right about hygiene. A long pushback thread on YouTube argued that a lot of what context engineering covers is really just better linting, better docs, and better deterministic guardrails. Largely true. The CDLC isn’t trying to replace those. It’s trying to bring the same hygiene we already apply to code over to the artifacts that drive our agents. Stating the obvious sometimes helps.
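And in that spirit, the deterministic slice really is cheap. Below is a toy context linter; every rule in it is an illustrative example of the hygiene class of checks, not any kind of standard, and the AGENTS.md convention is an assumption about where your context lives.

```python
from pathlib import Path

MAX_LINES = 300  # assumption: long context files dilute agent attention

def lint_context_file(path: Path) -> list[str]:
    """Purely deterministic checks on a context file, runnable in CI
    like any other linter. No LLM involved."""
    problems = []
    text = path.read_text(encoding="utf-8")
    lines = text.splitlines()
    if len(lines) > MAX_LINES:
        problems.append(f"{path}: {len(lines)} lines, consider splitting")
    if "TODO" in text:
        problems.append(f"{path}: unresolved TODO is being fed to the agent")
    lowered = text.lower()
    if "always" in lowered and "never" in lowered:
        problems.append(f"{path}: mixes 'always' and 'never'; check for conflicting rules")
    return problems

# Assumed convention: context files are AGENTS.md files anywhere in the repo.
for md in Path(".").rglob("AGENTS.md"):
    for problem in lint_context_file(md):
        print(problem)
```

Run it in CI next to eslint and friends; no model call required.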
Where I’m taking this
The next thing I want to spend time on isn’t another framework. It’s the field research. What I keep hearing in conversations is that the interesting work is happening inside organizations right now: which practices around CDLC are actually changing how engineering teams operate, and which are just slideware. So I’m spending the next quarter collecting patterns from teams that are scaling agents across more than a handful of devs. Less “here’s a model,” more “here’s what twelve teams are quietly doing that works.” If you’re in one of those teams, I’d love to talk.
The three pillars I’m looking at, in case you want to compare notes: enablement (team and org fluency with agents), platform (agent tooling that behaves like a real delivery pipeline), and governance (evaluation, telemetry, accountability). Coding agents don’t scale themselves. That’s the next talk; more on what I’m exploring in “Coding Agents Don’t Scale Themselves. Neither Do Your Teams. The Rise of Agent Enablement.”
Related plug, because I curate the program and I’m not going to pretend I don’t: if you want a venue to actually hash this stuff out in person, come to AI Native DevCon in London, June 1st and 2nd. I have biased opinions about which AI-and-coding conferences are best right now, and this one is mine. Promo code PATRICK50 at checkout.
Until then: thank you for watching, translating, arguing with, riffing on, and yes, pointing at the diagram numbering. Keep doing it. That feedback loop is the lifecycle in action.
Talk soon.
From the YouTube comments
The video has ~65 comments. Agreement outweighed dismissal roughly 2:1, and even most of the skeptics converged on “hygiene matters.” A representative slice, lightly grouped, with my replies where I had one.
| Bucket | Commenter | Their take | What I said back |
|---|---|---|---|
| Diagram meme | @Chris-se3nc, @hey-aleksei | “Dude vibe coded the infinity loop.” / “3:18 man cannot be serious with a vibe-coded diagram where steps go 1-4-3-2.” (32 likes — the most upvoted thread in the comments) | “It keeps the audience busy :)” — and it turned out to be a real point: people walk the loop in different orders |
| Hygiene pushback | @futuregovernance9584 | The /awesome prefix is really a linting concern; a lot of context engineering is just deterministic hygiene we should already be doing | “The point is not to replace it, it is better hygiene on what we feed LLMs with.” Commenter conceded: “hygiene is important I agree” |
| The 1AM pager | @Ysunio | “If I get paged because of a bug at 1AM, am I really updating context and waiting for evals to pass in CI/CD?” | “There still is code, and if it needs fixing you still tell it to fix code. Then improve the context to prevent it next time. I did not imply all code will be done.” |
| No demos | @DataFlowsMaster | “Incredible this has 59k views without a single demo. Everything is static samples and semi-fancy diagrams.” | Fair |
| Reinventing code | @craevzopl | “This sounds like reinventing code. I mean, dependency hell in context?” | Partly the point — context needs the same disciplines we built for code |
| Möbius, not infinity | @TechnoMageCreator | The loop is closer to a Möbius strip; they use that shape to guide agents | Worth thinking about |
| Library-First pointer | @st.chiotis | Pointer to “Library-First Engineering” as a related approach | “Interesting pointer!” |
| Substantive add | @bramburn | Built a homegrown conversation summariser that “inserts relevant information in the context to add more guardrail what was done in the past” | “What I advocate is not perfect code through specs. It is that the agents behave better with better context. So we should have a lifecycle for it.” |
| Open weights counter | @NewMoralArchitecture | “Weights are the new Context. You NEED open models.” | Still chewing on this — the open-vs-proprietary dimension is the part I haven’t worked through |
| Enforcement gap | @catzshort | “Context is still suggestive not enforced at runtime.” | “That’s for skills replacing code. But you can limit the behaviour by using deterministic scripts.” |
| The joke that won | @corruptedknight0 (72 likes), reply @TheRealCornPop | “2060: Context is dead, code is the new context.” — “2060? Given how quickly things change in tech, I’d expect 2028.” | — |
| The joke I joined | @HenriqueAraujo174 | “We are back to coding, just in a different flavor.” | “It’s code Jim, but not as we know it!” |
The most viral detail in the whole comment section is the 1-4-3-2 numbering, which I will be carrying with me for the rest of my career. Somewhat ironically, that’s the talk’s premise in miniature: small context errors propagate further than you’d think.
Full coverage trail
The full list of pieces I’ve come across, in case you want to read past the highlights above. Roughly grouped, lightly opinionated.
Long-form articles and write-ups
- Kushal Banda, Towards AI: “Context Is The New Code”
- Conffab: The Context Development Lifecycle
- Boden Fuller: CDLC
- StartupHub.ai: AI Engineers, Context is the New Code
- EveryDev.ai: DevOps for Context Engineering
- Thinkata: Context Is New Code (parallel framing, doesn’t cite the talk)
Framework variants and extensions
- 12factoragentops.com / CDLC (7-stage variant)
- Jarosław Wasowski on Medium: Managing Agent Context, CDLC + SDD (introduces “context debt”)
- baz.co: cyber.md, AI-Native Posture That Speaks Agent (CDLC applied to security posture)
- themoltnet on GitHub (lists CDLC as a related project)
- Mubashir Ali Baig, LinkedIn Pulse: CDLC, A New Paradigm in AgentOps
LinkedIn ecosystem
- Samuel Flender (Apple): the Context Flywheel
- Artem Zverev: CDLC with concrete artifacts
- Pawel Wiacek (Alokai): “Cherny is the printing press, Debois is the Monday morning”
- Vinay Krishna: CI/CD for context
- Dennis Traub: three context layers and the freshness problem
- Brandon Wooding (Imperial College): AI Engineer Europe recap
- Som Rout (Lloyds / Wipro): full conference recap
- Tech Talks Weekly: #1 software engineering talk of the week
- Aurimas Griciūnas (Swirl AI): State of Context Engineering 2026 (CDLC named in the top comment by Nicolas Boitout)
- LinkedIn share of the Mubashir Ali Baig Pulse piece
- Alan Walsh: agentic workflows vs n8n (not actually about CDLC, included for completeness)
International translations and write-ups
- Wikidocs (Korean)
- Taeho.io (Korean): five-stage rendering
- Note.com (Japanese)
- Nawaf Alwadaani (Arabic) on LinkedIn
- DavidKo Learning Journey (Traditional Chinese, Facebook)