One Contract Powers My API, My AI Agent, and My MCP Server
How structured code became a superpower when AI entered the stack
The definition of over-engineering is surprisingly hard to pin down. “We aren’t building Google” is the usual pushback. And it’s fair. Most codebases don’t need the abstractions that make sense at massive scale. But the argument smuggles in an assumption worth examining: that structure and speed are in tension. That organizing properly costs you time you could have spent shipping.
I’ve never found that to be true. In my experience, the right foundation makes you faster, not slower. Deriving schemas instead of duplicating them. Extracting shared logic before you have three copies of it. These aren’t detours from building. They’re how you build without accumulating drag.
AI has only reinforced this for me. The habits that critics called premature now turn out to be exactly what language models need to work effectively with your code. Things that used to come at a cost, like spending time on clean contracts and schema derivation, have become a genuine advantage. Not because the philosophy changed. Because the number of consumers did.
The Contract You Already Have
A few weeks ago I wrote about using Drizzle to generate Zod schemas and flowing them into oRPC contracts. The idea was straightforward: your database schema is the source of truth, so derive everything from it. Generate Zod schemas with drizzle-zod, refine them with .pick() and .extend(), and use those directly as your API contracts. One definition. Validated at every boundary.
What I didn’t realize at the time was how much work that contract was already doing. An oRPC contract carries an input schema, an output schema, a route description, and field-level annotations via Zod’s .describe(). I wrote those annotations for OpenAPI documentation. But it turns out they contain everything an AI agent needs to call your tool correctly: what it does, what it accepts, and what each field means.
The contract wasn’t just an API definition. It was a tool specification waiting to be used.
Three Consumers, One Operation
When I started adding AI capabilities to my system, I needed the same business logic exposed in three places. An oRPC endpoint for the frontend to call. An AI SDK tool for my chat agent to invoke. And an MCP tool for external clients like Claude Desktop or Cursor to use.
At first glance, these feel like three different things. Different transports, different protocols, different consumers. But when you look at what each one actually needs, the overlap is structural, not cosmetic. They all validate the same input. They all execute the same business logic. They all need a description of what the operation does. The only real differences are how they receive requests and how they surface errors.
The moment I recognized that, the pattern became obvious. You don’t write three implementations. You write one execute function and let the infrastructure fan it out.
The Pattern
Here’s what a procedure file looks like in practice. It defines the oRPC handler for the HTTP API, and in the same file, derives tools for both AI SDK and MCP from the same contract:
// The shared business logic
const execute = async (input, deps) => {
const [hero] = await deps.db
.insert(heroes)
.values(input)
.returning();
return { success: true, data: hero };
};
// HTTP API endpoint
export const registerHeroProcedure = contract.registerHero
.handler(async ({ input, context, errors }) => {
const result = await execute(input, { db: context.db });
if (!result.success) throw errors.INTERNAL_ERROR();
return result.data;
});
// AI agent tool + MCP tool, derived from the same contract
export const registerHeroTools = deriveTools({
name: 'register_hero',
contract: contract.registerHero,
execute,
});
deriveTools reads the contract’s input schema, output schema, and description. It generates an AI SDK tool that can be handed to any agent loop, and an MCP tool definition that can be registered on an MCP server. Both call the same execute function. Both validate input with the same Zod schema. Both describe themselves with the same annotations you wrote for your API docs.
The implementation is simpler than you might expect. The contract already carries everything both tools need:
function deriveTools({ name, contract, execute }) {
// Pull schema and description from the oRPC contract
const { inputSchema, route } = contract;
const description = route.description ?? route.summary ?? name;
return {
name,
// AI SDK tool — used by your agent loop
createAiTool: (deps) => tool({
description,
inputSchema,
execute: (input) => execute(input, deps),
}),
// MCP tool — used by external clients
mcpTool: {
name,
description,
inputSchema: zodToJsonSchema(inputSchema),
execute: (input, deps) => execute(input, deps),
},
};
}
One file. Three interfaces. No glue code.
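To see the fan-out run end to end, here is a self-contained sketch. The two one-line mocks stand in for the AI SDK’s tool() helper and a Zod-to-JSON-Schema converter, and the mock contract replaces the real oRPC one; everything else mirrors the shape above:

```typescript
// Mocks so the sketch runs without oRPC or the AI SDK installed.
const tool = <T extends object>(def: T) => def;
const zodToJsonSchema = (schema: unknown) => ({ mocked: schema });

type Deps = { db: string[] };
type HeroInput = { alias: string };

const registerHeroContract = {
  inputSchema: { parse: (v: HeroInput) => v }, // mock Zod schema
  route: { description: 'Register a new hero' as string | undefined, summary: undefined as string | undefined },
};

const execute = async (input: HeroInput, deps: Deps) => {
  deps.db.push(input.alias);
  return { success: true, data: input };
};

function deriveTools(opts: {
  name: string;
  contract: typeof registerHeroContract;
  execute: (input: HeroInput, deps: Deps) => Promise<unknown>;
}) {
  const { inputSchema, route } = opts.contract;
  const description = route.description ?? route.summary ?? opts.name;
  return {
    name: opts.name,
    createAiTool: (deps: Deps) =>
      tool({ description, inputSchema, execute: (input: HeroInput) => opts.execute(input, deps) }),
    mcpTool: {
      name: opts.name,
      description,
      inputSchema: zodToJsonSchema(inputSchema),
      execute: opts.execute,
    },
  };
}

const registerHeroTools = deriveTools({
  name: 'register_hero',
  contract: registerHeroContract,
  execute,
});

// Both derived tools hit the same execute with the same deps.
(async () => {
  const deps: Deps = { db: [] };
  await registerHeroTools.createAiTool(deps).execute({ alias: 'Nightowl' });
  await registerHeroTools.mcpTool.execute({ alias: 'Shade' }, deps);
  console.log(deps.db); // ['Nightowl', 'Shade']
})();
```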
Why This Matters More with AI
There’s a practical reason this pattern matters beyond code hygiene. Language models work with what you give them. When you hand an LLM a tool, it gets a name, a description, and a JSON schema for the input. That’s the entire surface it has for deciding whether and how to call your tool.
If your field descriptions are clear, the model calls the tool correctly. If the input schema is precise, you get fewer hallucinated parameters. If the description accurately captures what the operation does, the agent picks the right tool for the job.
All of that comes from the contract you already wrote. The .describe('Hero alias') you added for your OpenAPI spec is the same annotation the agent reads. The Zod enum that restricts alignment to 'hero' | 'villain' | 'antihero' is the same constraint that prevents the model from inventing an alignment that doesn’t exist.
The context window is finite. Every redundant schema, every duplicated type, every inconsistent description is context the model could have spent reasoning about the user’s actual request. Clean structure isn’t just organizational preference when AI is involved. It’s a resource allocation decision.
Composition at the Edges
The piece that keeps this from becoming a rigid framework is where assembly happens. Domains don’t know they’re being used by an agent. They export procedures and tools the same way they export any other module. The application decides what to compose.
import { registerHeroTools } from '@/domains/heroes';
export function createAgentTools(deps) {
return {
register_hero: registerHeroTools.createAiTool(deps),
// ... tools from other domains
};
}
import { registerHeroTools } from '@/domains/heroes';
export const mcpToolDefinitions = [
registerHeroTools.mcpTool,
// ... tools from other domains
];
Adding a new capability to the system is a single decision point. Write the use case, define the contract, export the tools. The admin app imports them and wires them into the agent, the MCP server, and the API router. Each domain stays independent. The composition root stays in control.
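The wiring itself reduces to a loop over the derived tools, assuming each domain exports the { name, createAiTool, mcpTool } shape sketched earlier. The stand-in objects and names below are illustrative:

```typescript
// Simplified stand-in for one domain's derived tools.
const registerHeroTools = {
  name: 'register_hero',
  createAiTool: (_deps: object) => ({ description: 'Register a new hero' }),
  mcpTool: { name: 'register_hero', description: 'Register a new hero' },
};

const allDomainTools = [registerHeroTools /* ...tools from other domains */];

// Agent side: a name → tool record, the shape agent loops expect.
const createAgentTools = (deps: object) =>
  Object.fromEntries(allDomainTools.map((t) => [t.name, t.createAiTool(deps)]));

// MCP side: a flat list of definitions to register on the server.
const mcpToolDefinitions = allDomainTools.map((t) => t.mcpTool);

console.log(Object.keys(createAgentTools({}))); // ['register_hero']
```

The composition root decides which subset each consumer sees; a domain never registers itself.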
It Comes for Free
The part I like most about this pattern is that it doesn’t impose anything. Not every endpoint needs to become an agent tool. Not every agent tool needs an MCP counterpart. The contract and the procedure work fine on their own as a regular API endpoint. Nothing about the structure forces you to derive tools from it.
But when you want to, there’s no additional lift. The contract already has the input schema, the output schema, and the description. Adding deriveTools is one function call. The boilerplate is basically non-existent. You’re not retrofitting anything or writing adapter code. The work you did to set up a clean API endpoint is the same work that makes it available to an agent or an MCP client.
This is the same instinct toward organization that’s always paid dividends for me. It just keeps compounding: the tools come into existence with zero extra effort, and the AI is more likely to get things right the first time because your schemas are clean and your descriptions are precise. Structure your contracts well for the first consumer, and the second and third are nearly free. You don’t need to plan for three interfaces from the start. You just need to not cut corners on the first one.