Been pondering what I call the “AI Coding Fabric” — a new infrastructure challenge emerging as AI agents move beyond traditional IDEs into sandbox environments. Platform engineers need to think about:
- Agent access rules — who can do what, where
- Spec registries — shared, versioned specifications
- Code-specific guardrail rules — beyond generic safety, actual coding constraints (see the sketch after this list)
- Agent/MCP discovery portals — how agents find and use available tools
- Code/Agent firewalling — network and filesystem isolation
- Code agent observability — what are agents actually doing?
- Code knowledge & data-source bill of materials — tracking what feeds into generated code
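
To make “code-specific guardrail rules” concrete, here is a minimal sketch of what a rule engine inside the fabric might look like. Everything here (GuardrailRule, check_action, the example rules) is hypothetical, not an existing tool:

```python
import re
from dataclasses import dataclass

@dataclass
class GuardrailRule:
    name: str
    action_type: str   # e.g. "shell", "file_write", "dependency_add"
    pattern: str       # regex matched against the proposed action's payload
    verdict: str       # "deny" or "warn"

RULES = [
    GuardrailRule("no-force-push", "shell", r"git push\b.*--force", "deny"),
    GuardrailRule("no-prod-env-writes", "file_write", r"\.env\.prod", "deny"),
    GuardrailRule("flag-new-deps", "dependency_add", r".*", "warn"),
]

def check_action(action_type: str, payload: str) -> list[GuardrailRule]:
    """Return every rule the proposed action triggers."""
    return [
        rule for rule in RULES
        if rule.action_type == action_type and re.search(rule.pattern, payload)
    ]

# An agent proposes a shell command; the fabric vets it before execution.
for rule in check_action("shell", "git push origin main --force"):
    print(rule.verdict, rule.name)   # deny no-force-push
```

The point is that the rules speak the language of coding actions (shell commands, file writes, dependency changes) rather than generic content safety.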
The fragmentation problem
Right now, developers make a series of separate decisions: which LLM, which agent framework (Claude Code, Codex), which interface, which add-ons (MCP), which planning framework. Eventually these will consolidate into platforms, but we’re not there yet.
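
Illustrative only: if you wrote today’s stack down as configuration, every layer would be a separate, swappable choice with nothing tying them together. All names and values below are made up for the example:

```python
from dataclasses import dataclass, field

@dataclass
class AgentStack:
    llm: str                    # model selection
    agent_framework: str        # e.g. Claude Code, Codex
    interface: str              # terminal, IDE plugin, web UI, ...
    mcp_servers: list[str] = field(default_factory=list)   # MCP add-ons
    planning: str = "none"      # planning framework, if any

stack = AgentStack(
    llm="some-frontier-model",
    agent_framework="Claude Code",
    interface="terminal",
    mcp_servers=["github", "postgres"],
    planning="spec-driven",
)
# Five independent decisions; nothing above them enforces policy, budgets,
# or observability across the whole stack.
```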
What’s missing
From the conversation, people identified additional gaps:
- IAM/RBAC for agents — identity and access management isn’t just for humans anymore (sketched after this list)
- DAST/SAST/SCA security scanning integrated into agent workflows
- Agent linters — checking agent behavior, not just code output
- Cost management — agents can burn through tokens fast (see the budget guard in the sketch below)
- Formal programming languages for agents — beyond natural language prompts
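
A minimal sketch of the IAM/RBAC and cost-management pieces, assuming the fabric assigns each agent an identity with role-scoped permissions and a per-task token budget. All names (AgentIdentity, ROLE_PERMISSIONS, the roles themselves) are hypothetical:

```python
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "reviewer":  {"repo:read"},
    "developer": {"repo:read", "repo:write", "ci:run"},
}

@dataclass
class AgentIdentity:
    name: str
    role: str
    token_budget: int     # max tokens this agent may spend per task
    tokens_used: int = 0

    def can(self, permission: str) -> bool:
        """RBAC check: is this permission granted by the agent's role?"""
        return permission in ROLE_PERMISSIONS.get(self.role, set())

    def spend(self, tokens: int) -> None:
        """Cost guard: stop the agent before it burns past its budget."""
        if self.tokens_used + tokens > self.token_budget:
            raise RuntimeError(f"{self.name} exceeded its {self.token_budget}-token budget")
        self.tokens_used += tokens

agent = AgentIdentity(name="pr-bot", role="reviewer", token_budget=50_000)
print(agent.can("repo:write"))   # False: reviewers can't push code
agent.spend(40_000)              # fine
# agent.spend(20_000)            # would raise: budget exceeded
```

If every agent action passes through an identity like this, it also becomes a natural place to hang observability and firewalling decisions.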
The pattern is familiar from DevOps: first we build the tools, then we build the platform that ties them together. We’re still in the “build the tools” phase.
Originally posted on LinkedIn