Recap of Berlin Postgres Meetup on September 23rd 2025
Takeaways from the Sep 23, 2025 Berlin Postgres Meetup: how Xata Agent (open-source Postgres AI agent) helps manage PostgreSQL at scale, Q&A highlights, lessons.
Author: Divyendu Singh
I recently presented "How LLMs Help Us Manage PostgreSQL at Scale" at the Berlin Postgres Meetup. It was a talk about our open source AI database agent, Xata Agent. I wanted to write down my thoughts about the event and how a Postgres AI agent was received by the Postgres community in Berlin.
I would like to start by thanking the hosts and the organizers. The meetup was very well run, and that was only possible because of the dedication of the organizers (Andreas Scherbaum, Oleksii Kliukin). A big thank you to them and to the event host, AWS.
Overall, the talk was well received and there were a lot of questions, making the session interactive. Compared to the same talk a few months ago, I noticed that more folks are using AI now, so the discussions were more about workflows and limitations. Only a few months ago, the discussions were more about "can it work?".
Recap
Here is a quick summary of the meetup:
- Topic: How LLMs Help Us Manage PostgreSQL at Scale (slides)
- Turnout: ~50 people
- Time: 50 minutes
Here are some of the questions that were asked (during and after the talk):
- Tools are static, and the only dynamic thing is the order and which tools the agent calls?
  Tools are static, but their parameters are dynamic: the agent chooses them. For the explain query tool, for example, the SQL to explain is picked by the agent (see the sketch after this list).
- How easy is it to write tools / playbooks?
  Playbooks are saved prompts in natural language, so editing and maintaining them is easy.
- What does the real workflow look like? Is it a scheduled cron, or can alerts trigger the agent?
  Today it runs at a user-defined period (in cron format), but it could react to alarms in the future.
- Does it support Ollama?
  Yes.
- Approvals + YOLO mode: what are some of the use cases?
  Xata branches (with PII removed) give the agent a safe place to try things out in YOLO mode, where the eventual artifact is a PR by the agent to improve things. This works well in teams with performance test suites. We have also played with the idea of replaying production traffic.
- How can we make sure the agent doesn't escape approvals?
  Tool calling happens in user land, so we can be sure the agent can't bypass it.
- Do you already have evals in the codebase?
  Yes, we had them, but we are using them less at the moment; we plan to bring them back into the codebase with our learnings after a refactor.
- What's the need for Xata Agent when Claude Code CLI can run custom agents and access the same logs etc. via MCP?
  When we wrote Xata Agent, subagents didn't exist; all of the 'competing' features are relatively new. A tailor-made agent is better suited to a specific task than a general one. Compare this to using webhooks to build an on-call system versus a tailor-made tool like incident.io. At the same time, programming subagents is getting easier and easier, and maybe in 1-2 years it gets so easy that you can roll your own custom agent very fast (and therefore no longer need a dedicated Xata Agent). The space is moving fast and we are evolving with it.
- How useful is the agent? Of all chats, how many are useful empirically?
  Drawing parallels with coding agents: about 70% are very useful, 25% are in the right direction, useful but needing careful eyes (or the agent can deceive you), and 5% are more hallucination than useful. LLMs are getting better and better at reasoning, especially if threads are kept short.
- Do you feel teams are getting dumber as more responsibility moves to agents?
  As with coding agents, it depends on how a team uses them and what the QA process is. In the end, humans are responsible for the system, so for coding agents the PR is still owned by humans. The amount of code will increase; the quality will vary from team to team.
- Who is our target audience?
  We are still evaluating that. We fit several use cases because of the tools we have (pgroll, pgstream, the agent); at the moment we are focusing more on dev-staging workflows.
- How do we share Postgres among our users? Noisy neighbors, etc.
  See "The economics of a Postgres free tier".
Community for the win
Another question, "How does it handle security / SQL injection?", led to an interesting discussion that continued after the meetup. We discussed what our approach to dealing with this had been, and then further improved it to make it more secure.
In summary, the agent was picking queries from "pg_stat_statements" directly and appending them to EXPLAIN to see their plans. This gave a malicious or hallucinating model an opportunity to run a bad query and extract information. Over the course of improvements:
- we mitigated it by not allowing the agent to run more than a single statement (so it can't sneak anything extra into the EXPLAIN ANALYZE that actually gets run),
- we ran that statement in a read-only transaction that we roll back immediately.
And lastly, we created a new function that works on the "queryid", so the agent no longer has the opportunity to modify the query text.
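As a rough illustration of that hardened flow, here is a minimal sketch assuming the node-postgres ("pg") client; it is not the actual Xata Agent implementation. The agent hands over only a queryid, the query text is looked up in pg_stat_statements server-side, and the EXPLAIN runs inside a read-only transaction that is rolled back immediately.

```typescript
import { Client } from "pg";

// Sketch of the hardened EXPLAIN flow described above (assumes node-postgres;
// not the actual Xata Agent code).
async function explainByQueryId(client: Client, queryId: string) {
  // The agent only supplies a queryid; the SQL text comes from
  // pg_stat_statements, so the agent cannot modify it.
  const { rows } = await client.query(
    "SELECT query FROM pg_stat_statements WHERE queryid = $1",
    [queryId]
  );
  if (rows.length !== 1) throw new Error(`unknown queryid: ${queryId}`);

  try {
    // Run the single EXPLAIN statement in a read-only transaction and roll it
    // back immediately, so nothing can persist.
    await client.query("BEGIN TRANSACTION READ ONLY");
    // Note: pg_stat_statements stores normalized queries with $1-style
    // placeholders; on PostgreSQL 16+, EXPLAIN (GENERIC_PLAN) can handle those.
    const plan = await client.query(`EXPLAIN ${rows[0].query}`);
    return plan.rows;
  } finally {
    await client.query("ROLLBACK");
  }
}
```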
Discussions like this are why we present our tools publicly, and why we believe that open source is a great way to build software.
Conclusion
Overall, I enjoyed presenting this talk and was happy that it was received well. I would like to thank the organizers and hosts of the meetup again.
If you would like to join the discussions and contribute to Xata Agent, join our community Discord here.
Thank you for reading! We look forward to having you try the platform. If you'd like early access, you can join our private beta today.
Related Posts
The economics of a Postgres free tier
Let's look at the numbers behind Xata's free tier. How much does a database cost us and why are we offering them for free.
Highlights from PGConf.DE 2025
A recap of Xata's presence at PGConf.DE 2025 in Berlin - our sessions, hallway track conversations and everything in between.