From DevOps to AI-Native: Rethinking Software Delivery

talks

In this TechRox podcast episode recorded ahead of the TechRox Summit, Patrick Debois sits down with host Dimitri Bi to discuss the journey from DevOps to AI-native development. Patrick recounts how boredom with the plateauing DevOps conversation, combined with explorations in the metaverse, digital twins, and gaming automation, naturally led him into the generative AI space. He describes AI as fundamentally an “integration game” — unlike traditional machine learning, it does not require deep mathematical expertise, making it accessible to anyone comfortable calling APIs and writing prompts.
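The "integration game" framing can be illustrated with a minimal sketch: wiring a prompt into a chat-completion-style HTTP request is ordinary API plumbing, with no ML mathematics involved. The endpoint URL, model name, and API key below are hypothetical placeholders, not details from the episode.

```python
import json
import urllib.request

# Hypothetical chat-completion endpoint and model name -- placeholders only.
API_URL = "https://api.example.com/v1/chat/completions"
MODEL = "example-model"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Package a prompt as a JSON chat request: plain API integration work."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("Summarize this changelog in two sentences.", "sk-placeholder")
print(req.get_full_url())
print(json.loads(req.data)["messages"][0]["role"])
```

The entire "AI" part here is the prompt string and the JSON shape of the payload; everything else is the same HTTP integration work developers already do daily, which is the point Patrick makes about accessibility.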

Patrick describes two personal waves of AI engagement. The first was AI platform engineering: bootstrapping a GenAI team at a company and applying platform engineering principles to scale AI capabilities across developers through blueprints, catalogs, versioning, and test frameworks. The second wave, which forms the core of his upcoming talk, focuses on how AI is reshaping the software delivery lifecycle itself — not putting AI inside products, but using AI to build products more effectively. He notes that tools like Cursor played a significant role in sparking the ecosystem, and that the rapid pace of change means the tooling landscape shifts every few months.

The talk previews Patrick’s four patterns of AI-native development. First, developers shift from producer to manager — they become reviewers and managers of AI agents that write the code, much like ops teams have always done when receiving code they did not write. Second, there is a move from implementation to intent, where developers focus on specifications and architecture rather than code details. Third, the shift from delivery to discovery means developers increasingly act like product owners, using vibe coding and rapid prototyping to explore what should be built rather than just shipping what was specified. Fourth, developers move from content creation to knowledge management, capturing and preserving institutional learning across the development process.

Patrick offers a nuanced take on productivity measurement. He deliberately avoids chasing productivity metrics, noting that speed gains in code generation often shift into increased review time. Instead, he focuses on understanding where AI works well and where it breaks down, advocating intentional overuse as a way to map those boundaries. He touches briefly on the continued relevance of DORA metrics — if you are still delivering to production, they still apply — and emphasizes that as the development cycle itself changes (from one agent to twelve agents coding asynchronously), CI/CD pipelines will need to fundamentally evolve. The conversation highlights how barriers are breaking down not just between people (as DevOps did between dev and ops) but between entire systems and ways of working.

Watch on YouTube — available on the jedi4ever channel

This summary was generated using AI based on the auto-generated transcript.