Summary
Disclaimer: This summary has been generated by AI. It is experimental, and feedback is welcomed. Please reach out to info@qcon.ai with any comments or concerns.
The presentation discusses the role of Postgres as a foundational platform for enterprise AI applications, emphasizing its robust and scalable infrastructure. Here is a structured summary:
Introduction
- Postgres is positioned as a cornerstone for enterprise AI, supporting Retrieval Augmented Generation (RAG) systems with reliability and scalability.
- Focus is placed on using Postgres for AI-driven context management instead of specialized vector databases.
Advantages of Using Postgres for AI
- Transactional Guarantees: Ensures reliable data retrieval and storage.
- Data Integration: Seamless integration with various data types and formats.
- Operational Simplicity: A single, well-understood database simplifies operational tooling compared to running and synchronizing multiple specialized systems.
- Data Governance: Robust frameworks for managing data securely.
Key Implementation Strategies
- Implementing data modeling strategies suitable for large datasets.
- Performance tuning techniques to enhance efficiency.
- Ensuring data synchronization across systems.
- Navigating RAG integration within enterprise data ecosystems.
Practical Use Cases
- AI-assisted project management tools are enhanced using relational queries and data integration capabilities of Postgres.
- Utilizing modern SQL to achieve complex operations traditionally attributed to graph databases.
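The graph-style queries mentioned above can be sketched with a recursive CTE. The snippet below is a minimal illustration using Python's stdlib `sqlite3` so it runs self-contained; the `deps` table and task names are invented for the example, and the same `WITH RECURSIVE` syntax works unchanged in Postgres.

```python
import sqlite3

# Hypothetical task-dependency table for an AI-assisted project
# management tool; SQLite stands in for Postgres here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deps (task TEXT, depends_on TEXT)")
conn.executemany(
    "INSERT INTO deps VALUES (?, ?)",
    [("deploy", "build"), ("build", "test"), ("test", "lint")],
)

# Walk the dependency chain of 'deploy' transitively -- the kind of
# traversal often attributed to graph databases, in plain SQL.
rows = conn.execute("""
    WITH RECURSIVE chain(task) AS (
        SELECT depends_on FROM deps WHERE task = 'deploy'
        UNION
        SELECT d.depends_on FROM deps d JOIN chain c ON d.task = c.task
    )
    SELECT task FROM chain
""").fetchall()
print(sorted(r[0] for r in rows))  # the full transitive dependency set
```

The recursive CTE keeps expanding the frontier of dependencies until no new rows appear, which is exactly the transitive-closure operation a graph database would offer as a traversal API.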
Conclusion
- Postgres stands out for its reliability and extensibility, offering a mature platform for developing sophisticated AI applications.
- The presentation concludes with a call to explore Postgres for streamlined AI architectures focused on reducing operational overhead and fostering data-driven decision making.
This summary captures essential points from Gwen Shapira's presentation, demonstrating the effectiveness of Postgres in supporting AI-driven enterprise applications.
This is the end of the AI-generated content.
Retrieval Augmented Generation (RAG) is now a fundamental pillar of enterprise AI, moving beyond initial adoption to production-grade applications. While the spotlight often shines on specialized vector databases, this session will present a compelling, practical argument for Postgres as the robust, scalable, and often superior foundation for production-grade context engineering.
As experienced engineers, we understand the value of battle-tested infrastructure. This talk will demonstrate how Postgres, enhanced by extensions like pgvector, can efficiently handle intelligent search and manage the rich, interconnected data, like user history and contextual information, essential for sophisticated AI applications. We'll dive deep into the practical advantages of a relational and ACID approach: transactional guarantees, data integration capabilities, simplified operational tooling, and robust data governance.
You'll walk away with practical tips for building and fine-tuning a complete RAG system on Postgres! We'll cover key implementation aspects, including data modeling strategies, performance tuning for large datasets, ensuring data synchronization, and navigating the nuances of integrating RAG within existing enterprise data ecosystems. Join this session to discover how to streamline your AI architecture, reduce operational overhead, and ground AI in your business facts with PostgreSQL.
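To make the retrieval step concrete: pgvector ranks rows by vector distance inside SQL (for example with its cosine-distance operator in an `ORDER BY ... LIMIT k` query). The pure-Python sketch below mimics that ranking with stdlib math so it stays self-contained; the document chunks and tiny embeddings are made up for illustration.

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cos(theta); pgvector exposes the same
    # measure as a SQL operator so ranking happens in the database.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy "table" of (chunk, embedding) rows. In Postgres with pgvector
# the equivalent would be a SELECT ... ORDER BY distance LIMIT 2.
docs = [
    ("reset your password", [0.9, 0.1, 0.0]),
    ("invoice and billing", [0.1, 0.9, 0.1]),
    ("login troubleshooting", [0.8, 0.2, 0.1]),
]
query = [1.0, 0.0, 0.0]  # pretend embedding of "can't sign in"

# Retrieve the two nearest chunks to ground the model's answer.
top = sorted(docs, key=lambda row: cosine_distance(query, row[1]))[:2]
print([chunk for chunk, _ in top])
```

Doing this ranking inside Postgres means the retrieved chunks come from the same transactional store as the rest of your business data, which is the core argument of the session.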
Interview:
What is your session about, and why is it important for senior software developers?
The session is about using the Postgres database to build context for an AI agent, or to retrieve data for RAG. Postgres is such a versatile database, and its use can evolve together with your understanding of the project's needs. You can use it for relational data, documents, JSON, vector search, graphs, geospatial data, and a lot more. And as a database that has been used in production for over 35 years, it is rock solid.
I've seen that when building with specialized databases, you end up with a bunch of them and need to keep them all in sync - it becomes a really complex system!
I want every developer to understand all the capabilities of this amazing database, and how to apply them when building agents and/or RAG. It will likely make their lives a lot simpler.
Why is it critical for software leaders to focus on this topic right now, as we head into 2026?
As you said, it's 2026. You *will* be asked to build either an AI chatbot or an agent. As a software leader, it's your job to build a great architecture for this - something that is both robust and flexible. My session will present an option that you should be aware of and consider.
What are the common challenges developers and architects face in this area?
This area is not only new, it changes daily. You start designing a system and the next day it's "RAG is dead. Actually no, you should use graph RAG. No, we are doing context engineering today. Actually, context windows are bigger now - do we still need RAG?".
Meanwhile, you still need to ship something.
And with models being non-deterministic and sometimes changing from day to day, it is really hard to know whether your project is struggling because you didn't do things right (the infamous "prompt better") or because this is simply the state of the art right now.
What's one thing you hope attendees will implement immediately after your talk?
I love this question. It is so important to learn from actually trying the ideas in practice.
I want the attendees to design an agent that uses SQL (rather than a bunch of APIs or tools) for its context. There's something really empowering when you experience how much you can achieve with one simple database.
Maybe after my talk the attendees will be able to pick one dataset (tickets, logs, customer notes), load it to Postgres, and have an agent use it.
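The exercise above can be sketched in a few lines. This is a minimal illustration, with SQLite standing in for Postgres so the snippet is self-contained; the `tickets` schema, data, and `build_context` helper are all invented for the example.

```python
import sqlite3

# Load a toy "tickets" dataset and let an agent pull its context with
# one SQL query instead of fanning out across several APIs or tools.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tickets (id INTEGER, customer TEXT, status TEXT, note TEXT)"
)
conn.executemany("INSERT INTO tickets VALUES (?, ?, ?, ?)", [
    (1, "acme", "open", "login fails after password reset"),
    (2, "acme", "closed", "billing address updated"),
    (3, "globex", "open", "export times out on large reports"),
])

def build_context(customer):
    # The agent's prompt context is just the rows relevant to this
    # customer, formatted as plain text.
    rows = conn.execute(
        "SELECT id, status, note FROM tickets WHERE customer = ? ORDER BY id",
        (customer,),
    ).fetchall()
    return "\n".join(f"[{i}] ({s}) {n}" for i, s, n in rows)

print(build_context("acme"))
```

The same pattern works with logs or customer notes: whatever the dataset, the agent's "tool" is a single SQL query over one database.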
What makes QCon stand out as a conference for senior software professionals?
The attendees are next level. I learn more in the hallway track at QCon than I do at most other events. Everyone is just so experienced and has such interesting ideas to share.
What was one interesting thing that you learned from a previous QCon?
I especially remember a talk from Lyft about how they implemented gRPC across their org. I learned a lot about gRPC, but more importantly, I learned how to drive adoption of a new technology across many teams in a big organization. I think I owe a promotion or two to that talk alone. And this mix of the technical and the social is so unique to QCon.
Speaker
Gwen Shapira
Co-Founder and CPO @Nile, Previously Engineering Leader @Confluent, PMC Member @Kafka, & Committer @Apache Sqoop
Gwen is a co-founder and CPO of Nile (thenile.dev). She has 20+ years of experience working with code and customers to build reliable and scalable data architectures - most recently as head of the Cloud Native Kafka engineering org at Confluent. Gwen is a committer to Apache Kafka and the author of "Kafka: The Definitive Guide" and "Hadoop Application Architectures". You can find her speaking at tech conferences or talking data at the SaaS Developer Community.