Everyone says AI will let us rewrite systems overnight. Just point an LLM at your codebase and watch it transform your legacy monolith into beautiful microservices.
They’re wrong about what makes that hard.
Here’s the thing. Code was never the hard part. Understanding intent was. And LLMs don’t magically solve that problem. They just shift where it lives.
The New Interface
LLMs don’t just generate code anymore. They act on systems.
Protocols like the Model Context Protocol (MCP) and CLI integrations let LLMs connect directly to databases, APIs, and file systems. The AI doesn’t write code that queries your database. It queries your database directly.
Think about what that means. The interface layer is shifting:
Old world: Human → Code → Database
New world: Human → LLM → MCP → Database
The code layer is becoming optional for many operations. An LLM can read your schema, understand your data, and answer questions without anyone writing a single line of code.
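The shape of that flow can be sketched with two hypothetical tool functions, the kind an MCP server might expose. This is an illustration, not the real MCP SDK; the table and data are made up:

```python
import sqlite3

# Illustrative in-memory database standing in for a production system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
conn.execute("INSERT INTO customers (name, status) VALUES ('Acme', 'active'), ('Globex', 'churned')")

def get_schema() -> str:
    """Return the DDL an LLM would read before writing a query."""
    rows = conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'").fetchall()
    return "\n".join(r[0] for r in rows)

def run_query(sql: str) -> list:
    """Execute a query on the LLM's behalf. No application code in between."""
    return conn.execute(sql).fetchall()

# The LLM reads the schema, then queries directly.
print(get_schema())
print(run_query("SELECT name FROM customers WHERE status = 'active'"))
```

Notice what the LLM leans on here: the schema text. That is all the context it gets.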
But here’s the catch. What does the LLM need to work well?
It needs context. Schema documentation. Relationship descriptions. Business rules like “never delete a customer with active orders.” Context about what the data actually means.
If an LLM can query your database directly, what becomes the new “code”? The documentation.
The Ambulance Problem
Let me tell you about a real system I’ve seen.
A company built a product for ambulance dispatch services. Their database had tables like ambulance_dispatch, ems_response_codes, ambulance_units. Made perfect sense at the time.
Then the company grew. They expanded to police departments. Fire stations. Private security firms. The business evolved to serve all kinds of emergency services.
The schema didn’t.
Now an LLM looks at ambulance_dispatch and has no idea it’s actually handling police calls. It sees ems_response_codes and assumes this is about medical emergencies, when half those codes are for traffic incidents.
Your table names are the first thing an LLM reads. If they tell the wrong story, every query it writes will be fiction.
Here’s what’s changed about technical debt. Before, a messy schema just made code harder to write. Developers would grumble, write a translation layer, and move on.
Now messy schemas make AI unusable. The LLM can’t ask a human “hey, does ambulance_dispatch actually mean emergency services in general?” It just assumes. And it assumes wrong.
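One low-risk mitigation, sketched here with SQLite, is to layer an honestly named view over the legacy table and point the LLM at the view. The table name comes from the example above; the columns and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Legacy table whose name no longer matches the business.
conn.execute("CREATE TABLE ambulance_dispatch (id INTEGER PRIMARY KEY, service_type TEXT, address TEXT)")
conn.execute("INSERT INTO ambulance_dispatch (service_type, address) "
             "VALUES ('police', '1 Main St'), ('medical', '2 Oak Ave')")

# A view that tells the true story. The LLM queries this, not the legacy name.
conn.execute("""
    CREATE VIEW emergency_dispatch AS
    SELECT id, service_type, address FROM ambulance_dispatch
""")

rows = conn.execute("SELECT service_type FROM emergency_dispatch ORDER BY id").fetchall()
print(rows)  # police and medical calls together, under an honest name
```

The legacy table keeps working for old code; the LLM reads a name that won’t lead it to write fiction.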
Documentation Becomes Infrastructure
I’ve been thinking about this a lot lately. Most enterprise “documentation” falls into a few categories:
PowerPoint decks. Five bullet points that made sense when someone was presenting them. Without the speaker, without the context of the room, they’re meaningless. An LLM can read the words. It can’t read the conversation that happened around them.
Confluence wikis. Created with good intentions. Updated twice. Now three years stale and nobody knows which parts are still accurate.
Tribal knowledge. The stuff that lives in people’s heads. “Oh, that field? It says ‘status’ but really it means…” This walks out the door every time someone leaves.
Here’s the uncomfortable truth: that PowerPoint deck isn’t documentation. It’s a souvenir from a meeting. An LLM needs the conversation, not the slides.
What Actually Works
Some formats are LLM-readable. Markdown files in your repo. Architecture Decision Records (ADRs) that capture the why, not just the what. Code comments that explain intent. API documentation that describes relationships.
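A minimal ADR skeleton shows what “capture the why” looks like in practice. This template is illustrative (the number, names, and details are invented), but the shape is the point:

```markdown
# ADR-014: Expose an emergency_dispatch view over the ambulance_dispatch table

## Status
Accepted

## Context
The ambulance_dispatch table now stores police, fire, and private
security calls. Renaming the table would break downstream ETL jobs.

## Decision
Expose a view named emergency_dispatch and document that service_type
distinguishes the call type. All new queries target the view.

## Consequences
Tools and LLMs read accurate names. The legacy table stays until the
ETL migration lands, at which point we rename it and drop the view.
```

An LLM reading this knows not just what exists, but why the ugly name is still there.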
The companies that invested in these formats? They’re accidentally AI-ready. The ones that didn’t? Their data is a black box. They can’t use MCP. They can’t use agents. They’re locked out of the AI tooling revolution because their context isn’t accessible.
The Inversion
Here’s what’s changed:
Before AI, documentation helped onboard humans. It was nice to have. Optional. Often skipped.
After AI, documentation enables AI to act. It’s required for AI workflows. It’s not a static artifact anymore. It’s active infrastructure.
Documentation went from cost center to competitive moat.
The Transcript Goldmine
Most decisions happen in meetings. And most meeting notes capture maybe 10% of what was actually said.
Think about what gets lost. The alternatives that were discussed and dismissed. The concerns people raised. The “well, actually…” moments that shaped the final decision. The political context. The tradeoffs that were considered and rejected.
ADRs capture that a decision was made and the official rationale. But they rarely capture why it almost went the other way.
Meeting transcripts capture all of that. And they’re already LLM-readable. Plain text. Full context. Who said what and when.
ADRs tell you what was decided. Transcripts tell you why it almost wasn’t.
Companies sitting on months of meeting transcripts are sitting on context goldmines. Most of them don’t even realize it. The recordings exist, maybe auto-transcribed, sitting in some folder nobody looks at.
The companies processing those transcripts into searchable, LLM-readable knowledge bases? They’re building moats their competitors can’t see. When their LLM needs to understand why the system works this way, it can find the actual conversation where that decision was made.
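A first pass at that processing can be small. Here’s a hypothetical sketch (the transcript text is invented): split a transcript into utterances and build a keyword index, so the conversation behind a decision is findable:

```python
import re
from collections import defaultdict

# Illustrative snippet; real input would come from auto-transcription.
transcript = """\
Dana: We should rename ambulance_dispatch before the police launch.
Lee: Renaming breaks the ETL jobs. Can we ship a view instead?
Dana: Fine, a view for now, rename after the ETL migration."""

def build_index(text: str) -> dict:
    """Map each lowercase keyword to the utterances that contain it."""
    index = defaultdict(list)
    for line in text.splitlines():
        for word in set(re.findall(r"[a-z_]+", line.lower())):
            index[word].append(line)
    return index

index = build_index(transcript)
# Why didn't the rename happen? The index points straight at the exchange.
for line in index["etl"]:
    print(line)
```

Swap the keyword index for embeddings and you have a knowledge base an LLM can actually use. The hard part isn’t the code; it’s deciding to keep the transcripts at all.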
The New Lock-in
So we avoided vendor lock-in by being careful about which platforms we depended on. We learned about internal lock-in (the kind I wrote about in Build vs Buy Is the Wrong Question) and started building for replacement.
Now there’s a new form: AI tool lock-in.
If you build sophisticated prompts that work perfectly with Claude, do they work with GPT? With Gemini? With whatever model comes next year?
Your system prompts encode knowledge. Your MCP configurations encode how to access your systems. Your agent workflows encode business logic. Is this the new proprietary code?
Here’s how I think about what’s portable in the AI world:
High portability: Your data (in standard formats). Your documented decisions. Your API contracts. These transfer between any AI system.
Medium portability: Your prompts. They’ll need rewriting, but the intent transfers. Keep them simple. Document what they’re trying to accomplish, not just what they say.
Low portability: Your agent workflows. Accept that these will be rewritten. Don’t over-invest in making them perfect.
Zero portability: Model-specific tricks. That clever hack that makes Claude do exactly what you want? It probably won’t work on the next model. Don’t build your architecture around it.
The principle stays the same as it always has. Invest in the portable layers. Data, schemas, documentation, intent. The AI-specific stuff? Treat it like code. It will be rewritten.
Context Outlives Prompts
In Data Outlives Code, I argued that your data is the real asset. Code gets rewritten. Data migrates.
In Build vs Buy Is the Wrong Question, I argued that lock-in is inevitable, so build for replacement. Design systems that can die gracefully.
Here’s the pattern that connects all three: invest in what survives transitions.
Code gets rewritten. Now LLMs make that even faster.
Prompts get rewritten too. Every new model, every capability update, you’ll be tweaking them.
But context? Your data. Your documented decisions. Your captured intent. The record of why things are the way they are. That survives.
The companies that will thrive in the AI era aren’t the ones with the cleverest prompts or the most sophisticated agent workflows. They’re the ones with the cleanest data, the best documentation, and the richest context.
Context outlives prompts. Invest accordingly.
This is the final part of a series on what lasts. Start with Data Outlives Code, then Build vs Buy Is the Wrong Question.