
Every time AI removes one bottleneck, the system reveals the next one. The useful question isn’t “how much faster” but “what breaks next.” That question has been my compass. Intentional overuse has proven a good learning strategy for me.
Yes, if all you have is a hammer, everything looks like a nail. That’s the criticism, and it’s fair as a permanent way of working. But as a learning strategy, you want to use the hammer on everything. That’s how you find out what’s a nail and what isn’t.
That’s what I’ve been doing: intentional overuse. Start using AI tools in anger, for real work, on real deadlines, with real consequences. And when it doesn’t work, don’t walk away. Will it to work. Rephrase, restructure, add context, try a different approach. The struggle is the point. That’s where you learn what the tool actually needs from you versus what it can’t do at all.
Learning happens at the edge
Everyone’s learning style is different. I learn by doing, and for me the learning happens at the edge of my ability, not in the center of it. Ericsson called it deliberate practice, Vygotsky called it the zone of proximal development. Same idea: push just beyond what’s comfortable, get feedback, adapt, push again. When you can fall back to the old way, you will, every time the new way gets hard. Intentional overuse removes the fallback and forces you into the gap where the learning lives.
Applying this to AI tools
- Use AI for tasks you know how to do. This is where you can evaluate quality. You already know what good looks like, so you can spot where the AI falls short. If you only use AI for unfamiliar tasks, you have no baseline.
- Use AI for tasks it’s probably bad at. Nuanced architectural decisions. Subtle performance optimizations. Push into the uncomfortable zone. The failures are the data. I tried using GenAI for OCR. It got better and better at extracting the text itself, but positioning and layout? That’s where it fell apart, and I ended up needing something like PaddleOCR (a short sketch of that path follows this list). That’s a mapped boundary.
- Use AI beyond code. I use it for docs management, content publishing, site operations. This blog is built and maintained almost entirely through AI agents. That’s where you discover that the bottlenecks aren’t code-specific. They’re workflow-specific.
- Will it to work. When an agent mangles a refactoring, don’t switch to doing it by hand. Try again with better context. Add a CLAUDE.md. Break the task differently. I’ve had sessions where the fifth attempt worked, not because the model improved, but because I finally gave it what it needed. That’s the learning.
- Keep a failure log. Not a mental note, an actual log. What did you ask for? What did you get? Where did it diverge? Patterns emerge fast. A minimal logging sketch also follows this list.
- If it’s hard, do it more often. Sound familiar? It’s the same principle that drove continuous integration and continuous delivery. If deploying is painful, deploy more, until it isn’t. If AI-assisted refactoring keeps going wrong, do more of it. The friction is the signal. Lean into the hard parts until they become routine.
- Take things apart. It’s the tinkerer mentality. When something works, break it open and understand why. When something fails, break it open and understand why. Change one variable at a time. Swap the model, change the prompt structure, remove context, add context. The people who learn these tools fastest are the ones who can’t leave well enough alone.
- Scratch an itch you care about. People ask me how to get started. My answer: pick something that actually bothers you. A side project, a workflow that annoys you, a tool you wish existed. If you don’t care about the outcome, you won’t push through the hard parts. Caring is what keeps you in the struggle long enough to learn.
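On the OCR boundary: the reason PaddleOCR closed the gap is that it returns positions, not just text. Here is a minimal sketch using the classic `paddleocr` 2.x interface; the API has shifted across versions, so treat the exact calls and the file name as illustrative.

```python
from paddleocr import PaddleOCR  # pip install paddleocr (plus a paddlepaddle build)

# Detection + recognition + angle classification in one pipeline.
ocr = PaddleOCR(use_angle_cls=True, lang="en")

# In the 2.x API, ocr() returns one list per page; each line is
# (bounding_box, (text, confidence)).
result = ocr.ocr("scanned-page.png", cls=True)

for box, (text, confidence) in result[0]:
    # box holds four (x, y) corner points: the layout information the
    # generative models kept fumbling, delivered alongside the text.
    print(f"{confidence:.2f}  {text}  at {box}")
```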
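On the failure log: the format matters far less than actually keeping it. A minimal sketch, assuming nothing beyond the Python standard library; the field names and the example entry are illustrative, not from a real session.

```python
import datetime
import json
from pathlib import Path

LOG = Path("ai-failure-log.jsonl")


def log_failure(asked_for: str, got: str, diverged: str) -> None:
    """Append one structured entry: what you asked for, what you got, where it diverged."""
    entry = {
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "asked_for": asked_for,
        "got": got,
        "diverged": diverged,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Illustrative entry only.
log_failure(
    asked_for="Refactor the payment module behind a single interface",
    got="Renamed files but kept the original coupling",
    diverged="I never stated the architectural constraint explicitly; the agent had to guess",
)
```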
What I found
The more I use it in anger, the clearer the picture gets. Code generation itself is no longer the problem. It’s gotten reliable: single-file, multi-file, refactoring, boilerplate, tests. The models are good enough. That boundary has moved.
The real boundaries are everything around the code:
- Review. AI generates code fast. Humans review it slow. The bottleneck flipped. I found myself spending more time reading agent-generated diffs than I ever spent writing code. And the think tax compounds. I caught myself approving changes I hadn’t fully understood, simply because the tests passed. That’s when I realized the review process itself needed rethinking, not just the code generation.
- Context supply. The quality of what goes in determines the quality of what comes out. I kept getting confident garbage from agents until I started investing in CLAUDE.md files and structured project rules. The moment I gave agents proper context (coding standards, architectural decisions, project-specific conventions), the output quality jumped. Context engineering is becoming its own discipline because this is where most failures actually happen. A rough sketch of what such a file can contain follows this list.
- Parallelism, merging, and context. One agent is manageable. Running multiple agents in parallel on the same codebase (one researching, one writing, one backfilling content) taught me that each agent operates with its own partial view. They don’t share context. They make conflicting assumptions. I had agents generating links using one naming convention while the build system expected another. The question that reveals the boundary: what would it take to run 5 agents in parallel? How about 50? Or 500? The coordination overhead between agents is the real cost nobody talks about. A minimal fan-out/fan-in sketch of that setup also follows this list.
- Memory and knowledge capture. Every session starts fresh. I’d solve a code-style convention issue, hit the same problem three sessions later, and solve it again from scratch. The decisions, the dead ends, the reasoning all evaporated between sessions. Claude Code now maintains a MEMORY.md per project that persists, and that helped. But it’s early. This is the most underestimated bottleneck.
- Security. The more I let agents do, the more I noticed the attack surface growing. I went deep on sandboxing agents and intercepting prompt injection at the syscall level, not because I read about the risks, but because I felt them when running agents with real file system access on real projects.
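What “proper context” means will differ per project. As a purely illustrative sketch covering the three categories above (every rule below is made up, not taken from any real codebase), a CLAUDE.md can look something like this:

```markdown
<!-- Illustrative example only: replace every rule with your project's reality. -->
# Project context for agents

## Coding standards
- TypeScript strict mode; no `any` without a comment explaining why.
- Small, pure functions; no shared mutable state between modules.

## Architectural decisions
- All outbound calls go through the `gateway/` layer; never call vendor SDKs directly.
- The event log is the source of truth; database tables are projections.

## Project-specific conventions
- Internal links use kebab-case slugs that match the file name exactly.
- Generated content lives under `content/generated/`, never in `content/posts/`.
```

The point is less the individual rules than the fact that the agent no longer has to guess them.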
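And to make the fan-out/fan-in shape concrete: the sketch below uses only the Python standard library, and `run_agent` is a hypothetical stub standing in for whatever CLI or API drives your agents. The fan-out is trivially parallel; the fan-in, where outputs produced under different assumptions have to be reconciled, is the part that doesn’t scale from 5 to 50 to 500.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def run_agent(task: str, shared_context: str) -> str:
    # Hypothetical stub: swap in a call to whatever CLI or API drives
    # your agents. It should return the agent's output for one task.
    return f"[stubbed output for: {task}]"


# Give every agent the same explicit conventions up front; otherwise each
# one works from its own partial view and invents its own assumptions.
context_file = Path("CLAUDE.md")
shared_context = context_file.read_text(encoding="utf-8") if context_file.exists() else ""

tasks = [
    "research: collect sources for the next post",
    "write: turn the outline into a draft",
    "backfill: add missing metadata to older posts",
]

# Fan-out: the easy, embarrassingly parallel part.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(lambda t: run_agent(t, shared_context), tasks))

# Fan-in: reviewing and reconciling outputs produced under different
# assumptions. This is the coordination overhead that dominates at scale.
for task, result in zip(tasks, results):
    print(f"--- {task}\n{result}\n")
```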
The model isn’t the bottleneck. The surrounding workflow is.
Know when you’ve mapped the boundary
Intentional overuse is a learning strategy, not a permanent workflow. It’s how a craftsman learns a new material: push it, bend it, break it, until you develop an intuition for what it can and can’t do. A woodworker knows the grain of a piece of wood before they make the first cut. That knowledge comes from working with a lot of wood, including the pieces that split.
The goal is that kind of intuition for your specific tools in your specific codebase. Not a general opinion about whether AI is “good” or “bad” at coding. Those generalizations are useless. The feel for the material is everything.
The boundaries shift with every model update, every tool improvement. What was unreliable six months ago might be reliable now. The map needs constant updating, which means you never fully stop pushing.
And there’s an unexpected reward. When you’ve hit the limits yourself, you start to appreciate the industry solutions differently. You see a new tool solving the exact merge problem you struggled with, and you understand why it matters, not because someone told you, but because you felt the pain. Hitting the boundaries first makes you a better judge of what’s genuinely useful versus what’s just marketing.
You don’t learn the edges of the map by staying in the center.
The teams chasing productivity metrics will plateau. The teams mapping boundaries will know exactly where to apply pressure, and where to hold back.
Community reactions
The LinkedIn discussion surfaced some sharp observations:
- Bryan Finster drew a direct parallel to continuous delivery: “Same way I learned CD.” Short, but exactly the point. The principle of leaning into discomfort to build skill isn’t new. It’s how the DevOps community learned to deploy daily instead of monthly.
- Neil Douek picked up on the scaling dimension: “Running 5 agents is interesting. Running 50 becomes architecture. Running 500 becomes governance.” The bottleneck shifts from tool skill to coordination design as you scale up agent parallelism.
- Pete Hodgson shared a great analog: a teammate who enforced “no-mouse Fridays” by physically collecting mice and trackpads. Hated it, but it worked. Same principle: remove the fallback, force the learning.
- Robert Westin noted that infrastructure has been a weak spot for models, though Opus 4.6 is closing the gap. System-level reasoning remains an open boundary.
- Alexandru Gavrilescu made an underrated point: “Who struggles with AI today will learn very important lessons that won’t be possible anymore with the next generations of models.” The struggle itself is time-limited. As models improve, some boundaries disappear, and with them the chance to learn what those boundaries taught you.
- Nnenna Ndukwe has been doing the same thing independently and documenting everything. The pattern is the same: use it, hit the wall, write down what you found.
- Dalton C. described a three-stage evolution: high-surveillance use for safe tasks, then pushing limits with micro-management, then running a dedicated “crash test project” alongside real work. Two terminals: left is production, right is disposable. The disposable project gives you free rein to test high-risk scenarios with no consequences. Smart setup.