Gave a talk at CTO Club Belgium on coding with AI. The session was deliberately interactive — less presentation, more shared experiences. Most CTOs in the room had teams experimenting with AI coding tools, and the conversation quickly moved past the hype into what actually works at scale.
The gap between vibe coding and reality
Everyone wants to vibe code. Most early adopters hit a wall. The tools are impressive in demos but frustrating in real codebases with legacy code, custom conventions, and team dynamics. The gains are real, but only for teams that adapt how they work rather than treating AI as a drop-in accelerator.
The session focused on what happens when an entire team adopts AI coding, not just one enthusiast. That’s where the interesting problems show up: inconsistent output, review bottlenecks, and context that doesn’t travel between developers and agents.
Documentation drift kills AI effectiveness
One of the best takeaways came from Lucas Desard, who wrote up his learnings after the session. His key insight:
Documentation drift kills AI effectiveness. Extensive specs and architecture docs create misalignment as code evolves faster than documentation.
The instinct is to write more documentation to help AI understand your codebase. But if that documentation gets stale — and it always does — you’re feeding AI conflicting information. The code says one thing, the docs say another. The AI doesn’t know which to trust.
Lucas shifted to what he calls just-in-time documentation: code is the authoritative source, business logic lives in comments close to the code, large standalone docs get eliminated, and full explanations are generated on demand by smaller models. Only high-level specs survive for non-technical stakeholders.
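To make that concrete, here is a minimal sketch of how just-in-time documentation can look in practice. The module, function names, and pricing rule below are invented for illustration, not taken from Lucas's writeup: the business rule sits in a comment beside the code it governs, and the only "documentation tooling" is a helper that rebuilds an explanation prompt from the live source whenever someone actually asks for one.

```python
from pathlib import Path


def net_price(gross: float, customer_segment: str) -> float:
    # Business rule lives next to the code it governs: wholesale customers
    # get a 12% discount. (Rule and names are hypothetical.)
    if customer_segment == "wholesale":
        return round(gross * 0.88, 2)
    return gross


def explanation_prompt(path: str) -> str:
    """Build a prompt from the current source file so any small model can
    generate a fresh explanation only when someone asks for one."""
    source = Path(path).read_text()
    return (
        "Explain the business logic in this module for a reviewer. "
        "Treat the code and its inline comments as the source of truth:\n\n"
        + source
    )


if __name__ == "__main__":
    # Pipe the prompt into whichever small model your team prefers, e.g.
    #   python jit_docs.py | <your model CLI>
    print(explanation_prompt(__file__))
```

Because the prompt is assembled from the current source on every request, there is nothing separate that can drift out of date.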
This maps directly to the broader shift from static documentation to living context — something I’ve been exploring in the context development lifecycle.
What I took away
The CTO Club format works well for this topic. Everyone’s experimenting, nobody has all the answers, and the practitioners in the room have real war stories. The pattern I keep seeing: teams that invest in their codebase quality (clear conventions, modular structure, good naming) get dramatically more out of AI coding tools than teams that just add a Copilot license and hope for the best.