
Unlearning, Experimentation and Engineering Rigor in an Agentic World


In this Thoughtworks Technology Podcast episode, Patrick Debois joins Nathen Harvey (DORA, Google Cloud) and host Ken Mugrage for an in-person conversation about what changes when AI agents enter the software development lifecycle. The discussion centers on a core tension: how do experienced engineers unlearn established habits while maintaining the engineering rigor that keeps systems reliable?

Nathen Harvey introduces the “amplifier effect” from DORA’s research: AI amplifies whatever is already happening on a team, for better or worse. Teams with good flow and solid practices ship even faster with AI, while teams with bottlenecks, such as slow code review, feel even more pain as AI-generated code floods through the same constraints. Patrick builds on this by arguing that the risk profile of what you are shipping matters more than rigid adherence to best practices, and that AI serves as a powerful tutor that accelerates the learning curve when picking up new languages or frameworks.

A significant thread throughout the conversation is the paradox of accumulated knowledge. Patrick observes that senior engineers accumulate so many internal checks and standards that they can actually become slower than someone who simply ships. The ability to unlearn, letting go of ingrained habits that no longer serve the current tooling landscape, becomes a critical skill. Nathen draws an analogy to cars replacing horses: in the future, you will not need to have written code by hand to be a software engineer, just as you do not need to know how to ride a horse to drive a car.

The talk explores how specifications and context are becoming the new code. Patrick argues that engineering effort is shifting toward providing the right context for AI agents through spec files, documentation, and rules, rather than writing implementation code directly. Nathen notes a fascinating side effect: senior engineers are now writing down their development practices in shared agent configuration files, inadvertently creating learning resources for juniors. Both speakers emphasize that context management is an emerging discipline in its own right, covering practices such as avoiding context pollution and progressive disclosure, that is, feeding the agent detail only as it becomes relevant rather than all upfront.
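To make this concrete, here is a minimal sketch of what such a shared agent configuration file could look like. The file name AGENTS.md, the project layout, and every rule in it are illustrative assumptions rather than anything prescribed in the episode; conventions vary by tool (Claude Code reads CLAUDE.md, Cursor has its own rules files, and AGENTS.md is an emerging cross-tool convention).

```markdown
# AGENTS.md: team conventions for coding agents (hypothetical example)

## Project context
- Go service exposing a REST API; business logic lives under /internal.
- Read docs/architecture.md before changing anything in /internal/billing.

## Working rules
- Run `make test` and `make lint` before declaring a change done.
- Keep diffs small and reviewable; never refactor unrelated code in the same change.
- If requirements are unclear, ask for the relevant spec file instead of guessing.

## Progressive disclosure
- Do not load every document upfront; pull in context files only when the task touches them.
```

Written down like this, the same file that steers the agent doubles as the kind of onboarding material Nathen describes juniors learning from.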

On organizational strategy, the speakers agree that managers must create space for experimentation rather than rushing to standardize on a single tool. Patrick recommends providing budget, removing friction, finding internal champions, and accepting that whatever practices feel current will be outdated within months. Nathen adds that celebrating failures openly is just as important as showcasing successes, and that a clear, well-communicated AI stance, even an imperfect one, reduces the stress on engineers who might otherwise use AI tools in a gray area. The conversation closes with Patrick predicting that knowledge management will become the dominant concern in the AI-native era, while Nathen urges organizations to be intentional about whether they are incrementally improving existing delivery processes or fundamentally reimagining them from scratch.

Watch on YouTube — available on the jedi4ever channel

This summary was generated using AI based on the auto-generated transcript.