From OpenAPI spec to MCP: How we built Xata's MCP server
Learn how we built an OpenAPI-driven MCP server using Kubb, custom code generators, and Vercel’s Next.js MCP adapter.
By Alexis Rico
Model Context Protocol (MCP) is an emerging standard that lets AI models securely interact with tools and APIs in real time. Building an MCP server means exposing a set of “tools” (operations) that a Large Language Model (LLM) can call to perform tasks, for example, fetching data or triggering actions via your backend. Rather than hand-coding each tool, we set out to generate an MCP server from our existing OpenAPI specification, leveraging our API’s schema as a single source of truth. This OpenAPI-driven approach promises quick development and consistency, but it comes with design considerations.
On one hand, auto-generating tools directly from a REST API spec is very appealing. You’ve already documented your API thoroughly, so why not transform those endpoints into AI-accessible functions? It saves time and keeps the API and MCP definitions in sync, avoiding duplicate work. On the other hand, a naïve one-to-one mapping of every endpoint to an MCP tool can overwhelm an LLM. LLMs struggle to choose the right action from so many low-level options, leading to frequent errors or unpredictable calls, especially if several endpoints have similar purposes.
The solution lies in a balanced approach. Writing an MCP server entirely by hand for a large API would be a massive time sink: manually crafting each tool's schema and handler is tedious and error-prone, especially when a well-defined OpenAPI spec already exists. Instead, we can autogenerate the groundwork from OpenAPI, then curate it. In practice, this means using codegen to produce a set of tool definitions and client calls, then trimming or augmenting the spec that drives tool generation so the result aligns with real-world usage.
This post is a technical overview of how we built the Xata MCP Server, covering our switch to a new OpenAPI codegen approach, custom generation of MCP tools, and the Next.js server implementation.
We’ll walk through the journey in three parts:
- Migrating to Kubb for OpenAPI code generation. Why we replaced our previous codegen with Kubb and the benefits gained.
- Customizing Kubb with custom generators. How we generated a TypeScript API client and a suite of MCP tools from our OpenAPI spec.
- Creating the MCP Server with Next.js. Wiring it all together using Vercel’s MCP adapter, route handlers, authentication middleware, and initializing the generated tools.
Let’s explore, step by step, how the Xata MCP Server was built.
Migrating from OpenAPI Codegen to Kubb
Our first task was to revisit how we generate API client code from Xata’s API specification. Historically, we used a traditional OpenAPI code generator to produce a TypeScript client for the Xata REST API. This approach worked, but it was rigid and hard to customize. Adding new output formats or tweaking the generated code meant wrestling with scripts or post-processing the results. We wanted a more flexible, integrated solution.
Enter Kubb, a toolkit designed for TypeScript projects to generate code from OpenAPI/Swagger specs. Kubb can generate TypeScript types, API clients, React Query hooks, Zod validators, MSW handlers, and even MCP integration code, all from an OpenAPI spec.
Another key reason we chose Kubb was its plugin and generator architecture. Kubb’s code generation process is highly customizable: you can plug in predefined generators or write your own to tailor the output. This was exactly what we needed. Instead of treating our OpenAPI spec as just input for a one-size-fits-all client generator, we could leverage it to produce multiple outputs, like a low-level API client and a set of MCP tools in one go.
Kubb Configuration for Xata’s API
Setting up Kubb was straightforward. We added a kubb.config.ts
in our project, pointing it to Xata’s OpenAPI spec (which we maintain for our REST API) and declaring which plugins/generators to use. For our case, we enabled the core OpenAPI parser, TypeScript type generation, a custom client generator, and a custom MCP tool generator.
Here's a simplified sketch of what our Kubb config looks like (file paths and generator imports are illustrative):
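```ts
// kubb.config.ts — simplified sketch; paths and generator names are illustrative
import { defineConfig } from '@kubb/core'
import { pluginOas } from '@kubb/plugin-oas'
import { pluginTs } from '@kubb/plugin-ts'
// Our custom generators (described in the next section)
import { clientGenerator } from './generators/client'
import { mcpGenerator } from './generators/mcp'

export default defineConfig({
  input: { path: './openapi.yaml' },
  output: { path: './src/generated' },
  plugins: [
    // Parses the spec and feeds each operation to our custom generators
    pluginOas({ generators: [clientGenerator, mcpGenerator] }),
    // Emits TypeScript types for request/response schemas
    pluginTs(),
  ],
})
```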
In this config, the `@kubb/plugin-oas` plugin handles reading the OpenAPI spec and iterating over its contents. We then provide our own generators in the `generators` array, one to build the API client and another to build the MCP tool definitions. The `plugin-ts` plugin is included to output TypeScript interfaces/types for our API schemas (useful for strong typing of request and response bodies). Kubb will orchestrate all these plugins in one run, parsing the spec once and feeding the data to each generator.
With the config in place, a simple command (e.g. `pnpm kubb generate`) triggers the code generation.
Custom Generators: API Client and MCP Tools
Using Kubb's extensibility, we wrote custom generators to produce two key outputs from the OpenAPI spec: (a) a TypeScript API client for Xata's REST API, and (b) MCP tool handlers that map onto those API endpoints. By generating these from the spec, we ensure consistency and save a ton of manual coding.
1. Generating a Typed API Client from OpenAPI
The first generator, `clientGenerator`, focuses on creating a lightweight API client library. We wanted to keep the ergonomics of the fully type-safe API client that we have been using for three years. While Kubb offers a default client generator (using Axios by default), we opted to customize it to better fit our needs (for example, to use `fetch`, handle our auth scheme seamlessly, and preserve the same surface API that we had in our codebase).
In essence, our client generator iterates over each operation in the OpenAPI spec and emits a function that calls that endpoint. For each operation, we use its operation ID (or a modified version of it) as the function name, and generate a TypeScript function signature based on the operation’s parameters and response schema.
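For illustration, here's roughly the shape of one generated function (the endpoint path, type names, and options object are hypothetical, but the pattern matches what the generator emits):

```ts
// Generated from a hypothetical `listDatabases` OpenAPI operation
import type { ListDatabasesResponse } from './types'

export async function listDatabases(
  workspace: string,
  options: { apiKey: string; baseUrl?: string }
): Promise<ListDatabasesResponse> {
  const base = options.baseUrl ?? 'https://api.xata.io'
  const res = await fetch(`${base}/workspaces/${workspace}/dbs`, {
    headers: { Authorization: `Bearer ${options.apiKey}` },
  })
  if (!res.ok) throw new Error(`listDatabases failed with status ${res.status}`)
  return (await res.json()) as ListDatabasesResponse
}
```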
Because this client code is generated, it stays up-to-date with our API. If we add a new endpoint or change a parameter in the OpenAPI spec, re-running Kubb will update the client functions accordingly. This beats hand-writing HTTP calls for each new feature. It’s also less error-prone as we don’t risk typos or forgetting a header, because the generation logic consistently applies the spec’s details. In short, the OpenAPI spec remains the single source of truth, and our API client is a direct reflection of it.
2. Generating MCP Tool Definitions from OpenAPI
The second (and more interesting) generator is the `mcpGenerator`. This one produces code that bridges the gap between our API and the MCP tool interface that an AI agent can use. Although Kubb has support for building a default MCP server, we decided to customize the generator to produce an `initMcpTools` function that we can call from Vercel's MCP Adapter.
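To give a feel for the output, here's a simplified sketch of the generated registration code (the tool name, schema, and client call are illustrative; the real file registers one tool per curated operation):

```ts
// Sketch of generated MCP tool registration (simplified)
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { listDatabases } from '../client'

export function initMcpTools(server: McpServer) {
  server.tool(
    'list_databases',
    'List all databases in a workspace',
    { workspace: z.string().describe('Workspace identifier') },
    async ({ workspace }) => {
      // Delegate to the generated API client (auth wiring simplified here)
      const result = await listDatabases(workspace, { apiKey: process.env.XATA_API_KEY! })
      return { content: [{ type: 'text', text: JSON.stringify(result) }] }
    }
  )
  // ...one server.tool(...) registration per curated OpenAPI operation
}
```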
Tool curation and descriptions: While we generated most tools, we did make some intentional choices. For instance, we omitted certain internal or less useful endpoints from the MCP interface to avoid cluttering the AI with too many options. We also edited some tool descriptions for clarity, for example, an OpenAPI description meant for developers might be adjusted to be more instructive for an AI agent.
Using Zod for input validation: We decided to use Zod schemas for the tool input definitions. Kubb can conveniently generate Zod schemas from the OpenAPI spec (via `@kubb/plugin-zod`), which we leveraged for complex data structures. Zod serves two purposes: it defines the input format for the AI (so that the AI knows what arguments to provide), and it validates any incoming request at runtime, adding a safety net. If an AI somehow provides an incorrect type, the MCP server will reject it before hitting our API.
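As a concrete (hypothetical) example, a generated schema for a create-branch payload might look like this, serving both as the advertised input shape and as a runtime guard:

```ts
import { z } from 'zod'

// Hypothetical generated schema for a create-branch tool input
export const createBranchRequest = z.object({
  project: z.string().describe('Project identifier'),
  branchName: z.string().min(1).describe('Name of the new branch'),
  parentBranch: z.string().optional().describe('Branch to fork from'),
})

// A malformed tool call fails fast, before ever reaching the Xata API:
// createBranchRequest.parse({ project: 'p1' }) // throws: branchName is required
```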
Building the MCP Server as a Next.js App
With our client and tool code generated, the final step was to stand up the MCP server itself. We chose Next.js to implement the server, using Vercel's `@vercel/mcp-adapter` package to handle the protocol details. This choice was driven by a few factors:
- Seamless Vercel Deployment: Xata’s MCP server would be deployed on Vercel, and Next.js is a first-class citizen there. Vercel’s MCP adapter is built to drop into a Next.js API route, making deployment and scaling straightforward.
- Serverless and Fluid Compute: Next.js on Vercel can take advantage of their new “Fluid” Node.js runtime, which is well-suited for long-lived connections like SSE (Server-Sent Events) and can yield cost savings for AI workloads.
- Routing & Middleware: Next's API route and Middleware features allowed us to handle authentication and request routing the same way we build the rest of our frontend applications.
Route Handling with @vercel/mcp-adapter
We created a dedicated API route for MCP under the Next.js `app` directory. Following Vercel's example, we used a dynamic route `[transport]` to support both SSE and HTTP transports. In our project, we have a file like `app/api/[transport]/route.ts`. This dynamic segment (`[transport]`) means the route will match both `/api/mcp` and `/api/sse`. Inside this file, we use the adapter.
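Here's a simplified sketch of that route file (import paths and the `withAuth` helper are ours and named illustratively; the real handler has more wiring):

```ts
// app/api/[transport]/route.ts — simplified sketch
import { createMcpHandler } from '@vercel/mcp-adapter'
import { initMcpTools } from '@/generated/mcp' // generated by our mcpGenerator
import { withAuth } from '@/lib/auth' // our auth wrapper (shown below)

const handler = createMcpHandler(
  (server) => {
    // Register every generated Xata tool on the MCP server
    initMcpTools(server)
  },
  {},
  {
    basePath: '/api', // where the dynamic route is mounted
    redisUrl: process.env.REDIS_URL, // backs the stateful SSE transport
  }
)

const authedHandler = withAuth(handler)
export { authedHandler as GET, authedHandler as POST }
```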
Let's break down what's happening here. We call `createMcpHandler` to create a Next.js request handler that speaks the MCP protocol. We pass in a callback that receives a `server` object where we register our tools. Rather than manually listing each tool, we call our generated `initMcpTools(server)` helper, which in turn invokes all the `server.tool(name, desc, schema, impl)` definitions that were generated from our OpenAPI spec. This populates the MCP server with the full toolkit of Xata actions.
The MCP server is wrapped with a `withAuth` wrapper function that verifies the token provided by the MCP Host. If no token is found, we return a 401 and prompt the MCP Host to start OAuth Dynamic Client Registration against our authentication server.
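A minimal sketch of such a wrapper, assuming a hypothetical `isValidToken` check against our auth server (the real implementation also handles the OAuth discovery endpoints):

```ts
// Minimal sketch of a withAuth wrapper (helper names are hypothetical)
type Handler = (req: Request) => Promise<Response>

async function isValidToken(token: string): Promise<boolean> {
  // Hypothetical: introspect the token against our authentication server
  const res = await fetch('https://auth.example.com/oauth/introspect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ token }),
  })
  return res.ok
}

export function withAuth(handler: Handler): Handler {
  return async (req) => {
    const token = req.headers.get('authorization')?.replace(/^Bearer\s+/i, '')
    if (!token || !(await isValidToken(token))) {
      // A 401 with a WWW-Authenticate challenge signals the MCP Host to
      // start OAuth Dynamic Client Registration against our auth server
      return new Response('Unauthorized', {
        status: 401,
        headers: { 'WWW-Authenticate': 'Bearer' },
      })
    }
    return handler(req)
  }
}
```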
We export the same handler for both GET and POST HTTP methods. According to the MCP spec (and Vercel’s adapter), the MCP client (the AI’s side) may use GET/POST for different phases of the handshake and tool calling. By exporting both, we ensure our Next.js route will handle all required requests.
In the configuration object passed to `createMcpHandler`, we included a `redisUrl`. This is because the Server-Sent Events (SSE) transport (used by Claude and some other clients) is stateful: it expects the server to maintain conversation state between calls. If a Redis URL is provided, the Vercel adapter uses Redis to store that state (identified by a session ID) so that multiple function calls in a session share context.
With this route set up, our Next.js app is essentially a fully functional MCP server. When an AI agent connects to it (via an MCP client), the adapter will handle the initial handshake and advertise all the tools we registered. The AI can then invoke any of those tools, and the adapter will call into our implementations (which call Xata’s API) and return the results back to the AI.
MCP vs traditional API: It's worth noting how this differs from a normal REST API. Instead of the client calling specific endpoints directly, the AI does a handshake to discover the available tools and then calls them by name. You can think of it as a capabilities-based RPC system. For example, rather than hitting a `/databases` REST endpoint, the AI asks "what tools do you have?" and the server replies with something like "I have a `list_databases` tool that lists all databases in a workspace, a `create_branch` tool that creates a branch on a project," etc. The AI decides which tool to use and sends a request like "invoke `list_databases` with workspace=X". The MCP server then executes our function for `list_databases`, which in turn calls the real Xata API, and the result is sent back to the AI.
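On the wire, this discovery-and-invoke flow is JSON-RPC. A simplified exchange might look like this (fields abridged; tool names from the example above):

```jsonc
// Client → Server: discover tools
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
// Server → Client (abridged)
{ "jsonrpc": "2.0", "id": 1, "result": { "tools": [
  { "name": "list_databases", "description": "List all databases in a workspace",
    "inputSchema": { "type": "object", "properties": { "workspace": { "type": "string" } } } }
] } }
// Client → Server: invoke a tool
{ "jsonrpc": "2.0", "id": 2, "method": "tools/call",
  "params": { "name": "list_databases", "arguments": { "workspace": "X" } } }
```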
Conclusion
By leaning into our OpenAPI schema, we gave our MCP server “superpowers”: the ability to evolve at the speed of our API and the confidence of strong typing and validation at every step. This hybrid approach (auto-generate then polish) let us stand up a powerful AI integration in a fraction of the time it would take to code from scratch. The MCP Server now offers a conversational interface to our platform, turning natural language prompts into real actions backed by our APIs. All of this was achieved by treating the API spec as executable knowledge, not just documentation.
As AI continues to weave into developer platforms, techniques like this will become increasingly common. They enable us to build smarter apps without reinventing the wheel for each new interface. If you’re excited by the possibilities at this intersection of AI and backend infrastructure, we invite you to give our new platform a try. Xata’s latest offering “Postgres at scale” with data branching and PII anonymization is now live. It combines a serverless Postgres experience with modern features like instant branching and data masking. Check out our announcement or request beta access to see how it can supercharge your development workflow, and feel free to experiment with our MCP server example as you explore what’s next in AI-driven development. Happy coding!