A conversation with Claude Code on coding practices


❯ Now I'm starting to think that the ideal content creation system is none of the above stuff we talked about, but just Claude Code and GitOps. All of the structure, the coding practices, the code re-use, the design patterns -- that's all just scaffolding to help humans carry the process through its several steps. But a coding agent can do all of that, up to and including a single deployment command.

⏺ That's a sharp observation. The whole abstraction layer — admin UIs, theme galleries, plugin systems — exists because humans need guardrails to navigate complexity across time and context switches. An agent doesn't need those guardrails in the same way.

The workflow collapses to something like:

  1. User describes what they want ("make the header dark, add a link to X")
  2. Agent edits templates, CSS, config directly
  3. Agent commits and pushes
  4. GitOps deploys
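Steps 2-4 can be sketched end to end in a throwaway repo. This is a minimal illustration, not plantagenet's actual setup: the GitOps side (a pipeline watching main) is assumed rather than shown, and the file name and CSS edit are invented for the demo.

```shell
set -e
# Work in a disposable git repo so the sketch is self-contained.
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email "agent@example.com" && git config user.name "Agent"

# Starting point: a stylesheet with a light header (hypothetical file).
printf 'header { background: #fff; }\n' > site.css
git add site.css && git commit -qm "Initial stylesheet"

# 1-2. User asks "make the header dark"; agent edits the file directly
#      (portable in-place substitution).
sed 's/#fff/#222/' site.css > site.css.tmp && mv site.css.tmp site.css

# 3. Agent commits. In a real setup it would also `git push origin main`,
#    which is what triggers step 4 (GitOps deploy) -- the agent itself
#    never holds SSH or prod credentials.
git add site.css && git commit -qm "Make the header dark"
git log --oneline
```

The only privileged boundary here is the git remote: everything after the push belongs to the CD pipeline.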

No theme format, no manifest files, no upload protocol. The agent is the theme editor and the package manager.

The interesting implication is that plantagenet's value proposition shifts. The app itself becomes thinner — just the runtime that serves content — and the "CMS" functionality increasingly lives in the agent interaction layer. The markdown files, templates, and git history are the source of truth; the web admin is mostly for convenience when you don't have an agent handy.

It does raise one real constraint: the agent needs access to the deployment environment or a CD pipeline. GitOps handles that cleanly — push to main, deploy happens. The agent never needs SSH or prod credentials directly.

Does this change how you're thinking about what features are worth building into the app itself?

❯ A little. To your point on guardrails: it's also true that code organization, while maybe not strictly necessary, is nevertheless beneficial to the AI agent making the changes as well. The agent will typically have a finite context window, and well-organized code uses less of that window. This parallels the benefit good coding practices give humans: smaller pieces of code are easier to understand and reason about.

⏺ Right — the analogy holds almost exactly. A human can't hold the whole codebase in working memory, so we modularize. An agent can't hold the whole codebase in its context window, so the same modularization helps for the same reasons.

And the payoff compounds: well-named small functions are easier for an agent to locate via grep/glob, easier to understand from a short read, and easier to modify confidently without side effects. The agent can make a targeted edit to render_gfm() without needing to load the entire app.
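That locality is easy to see concretely. In the sketch below, the file layout and function bodies are hypothetical — only render_gfm comes from this conversation — but the point stands: a well-named function is one grep away, with no full-codebase read required.

```shell
set -e
# Build a tiny hypothetical source tree.
src=$(mktemp -d)
cat > "$src/markdown.py" <<'EOF'
def render_gfm(text):
    """Render GitHub-Flavored Markdown to HTML."""
    raise NotImplementedError
EOF
cat > "$src/routes.py" <<'EOF'
def serve_page(path):
    raise NotImplementedError
EOF

# One targeted search pinpoints the single file that matters,
# printing file:line:match for the edit site.
grep -rn "def render_gfm" "$src"
```

One tool call, one short file read, one confident edit — instead of paging the whole app through the context window.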

There's also a secondary effect specific to agents: good organization reduces the number of tool calls needed to understand a change. If the code is tangled, the agent has to chase references across many files before it can act — burning context and increasing the chance of error or hallucination. Clean boundaries mean a smaller, more self-contained read before a confident write.

So the conclusion is something like: good software engineering practices are justified independently of whether the author or editor is human or AI. The audience changes, but the underlying reason — minimize the cognitive/contextual load required to make a correct change — stays the same.

❯ lol, "we modularize"