1) Let’s start with you!
Tell us a bit about yourself – your background, current role, and what excites you most in the world of tech.
I’m Sweta Vooda, a recent Master’s graduate in Computer Science from Georgia Tech, where I specialized in Computing Systems. My interests lie at the intersection of Databases, Distributed Systems, and Operating Systems. During my time at Georgia Tech, I explored PostgreSQL internals through research projects focused on extension development and vector search. I currently work at Sigma Computing on the Query Lifecycle team, contributing to the backend systems that power our cloud-based BI platform. What excites me most is building foundational systems that are extensible, performant, and impactful: systems that enable others to innovate on top of them.
2) Why PostgreSQL? What inspired you to explore or switch to PostgreSQL?
I started learning and working more with PostgreSQL during my research at Georgia Tech. What drew me in was the high extensibility and open-source nature of Postgres, which make it the perfect environment to learn, experiment, and build extensions. Whether it was developing custom extensions or diving into the internals, Postgres gave me the flexibility to explore ideas deeply. Its open nature, vibrant ecosystem, and real-world applicability made it an ideal choice, and the supportive community made the journey even more rewarding.
3) What are you working on with PostgreSQL right now?
Share the cool stuff you’re building, learning, or solving using PostgreSQL.
I worked on building pgvector-remote, an extension that transforms PostgreSQL into a control plane for vector search by coordinating with external vector engines. Instead of storing embeddings inside Postgres, the extension intercepts inserts and queries using custom index access methods, and asynchronously pushes vector operations to the external system. It preserves SQL simplicity while enabling high-performance vector search. This project served as a strong proof of concept for how Postgres can go beyond traditional storage and be extended to interact with external systems to offload compute-intensive operations.
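At the core of this design is PostgreSQL's index access method API, which lets an extension register its own handler and intercept every insert into the indexed column. The sketch below shows the general shape of such a handler; it is an illustrative fragment built against the PostgreSQL server headers (PostgreSQL 14+ `aminsert` signature), and names like `remotevec_handler` and `remotevec_insert` are hypothetical, not the actual pgvector-remote source.

```c
/* Illustrative sketch: a custom index access method that intercepts
 * inserts instead of writing local index pages. Built as part of an
 * extension against the PostgreSQL server headers. */
#include "postgres.h"
#include "access/amapi.h"
#include "fmgr.h"

PG_MODULE_MAGIC;
PG_FUNCTION_INFO_V1(remotevec_handler);

/* Called for each tuple inserted into the indexed table. Rather than
 * updating on-disk index pages, buffer the vector for a later
 * asynchronous push to the external engine. */
static bool
remotevec_insert(Relation index, Datum *values, bool *isnull,
                 ItemPointer heap_tid, Relation heap,
                 IndexUniqueCheck checkUnique,
                 bool indexUnchanged, IndexInfo *indexInfo)
{
    /* enqueue (heap_tid, values[0]) into an in-memory buffer here */
    return false;
}

Datum
remotevec_handler(PG_FUNCTION_ARGS)
{
    IndexAmRoutine *am = makeNode(IndexAmRoutine);

    am->amstrategies = 0;
    am->amsupport = 0;
    am->amcanorderbyop = true;      /* supports ORDER BY distance ops */
    am->aminsert = remotevec_insert;
    /* ambuild, ambeginscan, amgettuple, etc. would also be set */

    PG_RETURN_POINTER(am);
}
```

With the handler in place, `CREATE INDEX ... USING remotevec (...)` routes all index maintenance through the extension, which is what lets Postgres act as a control plane while the heavy vector work happens elsewhere.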
4) What’s been your biggest learning or challenge on this journey?
A lesson, mistake, or an aha moment, we’d love to hear about it!
Understanding how to safely buffer data in memory and defer flushing to external systems in an extension was a major challenge. I had to dig deep into Postgres’s WAL and transaction machinery and use RegisterXactCallback to ensure that external writes occurred only after a successful commit, preserving ACID guarantees without hurting performance.
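The pattern described above can be sketched with PostgreSQL's transaction callback API. This is an illustrative extension fragment (built against the server headers, not standalone code); `flush_buffered_vectors` and `discard_buffered_vectors` are hypothetical names for the extension's own buffer-management routines.

```c
/* Sketch: defer external writes until the local transaction outcome
 * is known, via RegisterXactCallback. */
#include "postgres.h"
#include "access/xact.h"

/* Defined elsewhere in the extension (illustrative names). */
static void flush_buffered_vectors(void);
static void discard_buffered_vectors(void);

static void
remotevec_xact_callback(XactEvent event, void *arg)
{
    switch (event)
    {
        case XACT_EVENT_COMMIT:
            /* Transaction committed locally: now it is safe to push
             * the buffered vectors to the external engine. */
            flush_buffered_vectors();
            break;
        case XACT_EVENT_ABORT:
            /* Rolled back: drop the buffer so the external system
             * never sees uncommitted data. */
            discard_buffered_vectors();
            break;
        default:
            break;
    }
}

void
_PG_init(void)
{
    /* Register once, at extension load time. */
    RegisterXactCallback(remotevec_xact_callback, NULL);
}
```

Because the callback fires only after Postgres has decided the transaction's fate, aborted inserts never leak to the remote engine, and the flush happens off the critical insert path.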
5) Your wisdom to rookies like yourself?
What’s one tip or piece of advice you’d give to someone just starting out with PostgreSQL?
My biggest advice: don’t wait until you have a “big” project to share; however small the contribution, it’s never insignificant. Just put your work out there. I published an article about a simple extension I was building, and to my surprise, experienced members of the PostgreSQL community read it, reached out with feedback, and even encouraged me to present at a conference. That one step led to mentorship, connections, and a huge confidence boost. This community truly supports and uplifts rookies who are willing to learn, contribute, show up, and share.
6) Finally, describe your PostgreSQL journey in one word.
Yep, just one!
Inspiring
7) Who or what has influenced your PostgreSQL learning the most?
A mentor, a community, a course, a project, tell us what or who helped you grow.
My journey began with the guidance of my professor at Georgia Tech, Dr. Joy Arulraj, whose mentorship played a key role in shaping my interest in database systems and Postgres internals. From there, the PostgreSQL community has been the most influential force in my learning. I’m constantly inspired by how welcoming, encouraging, and generous the community is with its time and knowledge.
Many inspiring and experienced members of the Postgres community made a huge impact: Mehboob Alam, who introduced me to the ecosystem; Joshua Drake, who gave me the opportunity to attend my first in-person PostgreSQL event; and Yuri, who encouraged me to speak at Postgres Extension Day. Beyond individuals, the community’s collective efforts to support newcomers, provide thoughtful feedback, and actively create space for early-career contributors gave me the confidence to share my work, learn deeply, and grow both technically and personally.
8) What’s one PostgreSQL concept or feature you finally understood and felt proud of?
That lightbulb moment when something clicked, we all have one!
Understanding MVCC finally clicked for me when I had to implement safe cleanup of old data. Digging into Postgres internals, specifically how xmin and xmax control row visibility and how VACUUM uses that information to reclaim space, made me appreciate how PostgreSQL handles concurrency without heavy locking while still maintaining efficiency.
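You can watch this bookkeeping directly from SQL. The snippet below is a minimal demonstration to run against a live PostgreSQL instance (table and column names are illustrative); xmin and xmax are system columns present on every table.

```sql
-- xmin is the transaction that inserted a row version;
-- xmax is the transaction that deleted/updated it (0 while live).
CREATE TABLE mvcc_demo (id int, val text);
INSERT INTO mvcc_demo VALUES (1, 'a');
SELECT xmin, xmax, * FROM mvcc_demo;

-- An UPDATE creates a new row version; the old one gets its xmax set
-- and becomes invisible to new snapshots, but still occupies space.
UPDATE mvcc_demo SET val = 'b' WHERE id = 1;
SELECT xmin, xmax, * FROM mvcc_demo;

-- VACUUM reclaims dead row versions once no transaction can see them.
VACUUM mvcc_demo;
```

Seeing the old version linger until VACUUM runs is exactly the "safe cleanup" problem: readers with older snapshots may still need that version, which is why Postgres defers reclamation rather than locking writers out.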
Our idea explores the implementation of AI-driven query optimization in PostgreSQL, addressing the limitations of traditional optimization methods in handling modern database complexities. We present an innovative approach using reinforcement learning for automated index selection and query plan optimization. Our system leverages PostgreSQL’s pg_stat_statements for collecting query metrics and employs HypoPG for index simulation, while a neural network model learns optimal indexing strategies from historical query patterns. Through comprehensive testing on various workload scenarios, we will validate the model’s ability to adapt to dynamic query patterns and complex analytical workloads. The research also examines the scalability challenges and practical considerations of implementing AI optimization in production environments.
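The metrics-collection and index-simulation loop the abstract relies on can be sketched in plain SQL, assuming the pg_stat_statements and HypoPG extensions are installed (pg_stat_statements must also be in shared_preload_libraries); the `orders` table and column are illustrative.

```sql
-- 1. Collect query metrics to feed the learning model.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- 2. Simulate a candidate index with HypoPG: nothing is built on
--    disk, but the planner will consider the hypothetical index.
CREATE EXTENSION IF NOT EXISTS hypopg;
SELECT * FROM hypopg_create_index(
    'CREATE INDEX ON orders (customer_id)');

-- 3. Check whether the planner would use it; the estimated costs can
--    serve as a reward signal for the index-selection agent.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

SELECT hypopg_reset();  -- discard all hypothetical indexes
```

Because HypoPG indexes cost nothing to "create", an agent can cheaply score many candidate configurations per workload before committing to a real CREATE INDEX.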
Our findings will establish a foundation for future developments in self-tuning databases while offering immediate practical benefits for PostgreSQL deployments. This work contributes to the broader evolution of database management systems, highlighting the potential of AI in creating more efficient and adaptive query optimization solutions.
This talk provides an introductory overview of Artificial Intelligence (AI) and Machine Learning (ML), exploring key concepts and their application in building intelligent systems. It will highlight the essential AI/ML techniques, such as supervised and unsupervised learning, and discuss practical use cases in modern industries. The session also focuses on how PostgreSQL, with its powerful extensions like PostgresML, TimescaleDB, and PostGIS, supports the development of AI-powered applications. By leveraging PostgreSQL’s ability to handle complex datasets and integrate machine learning models, participants will learn how to build scalable, intelligent solutions directly within the database environment.
Success is a product of Action, External Factors, and Destiny.
Of these three, the only aspect we can control is our action. Action, in turn, is the result of our EQ, IQ, SQ, and WQ (Willingness Quotient) combined.
We all want to be successful and keep trying to motivate ourselves with external factors. We read inspirational books, listen to great personalities, and whenever possible upgrade ourselves with more knowledge; the list goes on.
Indeed, these are excellent motivators, but in the process we forget the most important source of energy: YOU!
We read other stories to feel inspired, thinking “I am not enough!”
But the day we start accepting ourselves, introspecting, understanding, and aligning our life purpose with our routine, we find our internal POWER. This is a continuous source of motivation and energy that we need in low moments. When we feel lonely or stuck and seek help, our inner voice is our greatest companion.
But, how many times do we consciously think about our “Subconscious”?
“Journey to Self” is our structured coaching program, where we take the focus back from the outside and delve deep inside to find our inner strength, focusing on self-acceptance and personal growth.
I believe everyone has POWER within them!
Let’s be the POWERHOUSE!
Human, AI, and Personalized User Experience for DB Observability: A Composable Approach
Database users across various technical levels are frequently frustrated by the time-consuming and inefficient process of identifying the root causes of issues. This process often involves navigating multiple systems or dashboards, leading to delays in finding solutions and potential downstream impacts on operations.
The challenge is compounded by the varying levels of expertise among users. It is essential to strike the right balance between specialized and generalized experiences. Oversimplification can result in the loss of critical information, while an overwhelming amount of data can alienate certain users.
Developers and designers are constantly navigating these trade-offs to deliver optimal user experiences. The integration of AI introduces an additional layer of complexity. While AI can provide personalized experiences within databases, it is crucial to maintain user trust and transparency in the process.
The concept of personalized composable observability offers a potential solution. By combining the strengths of human expertise, information balance, and AI-driven personalization, we can create intuitive and user-friendly experiences. This approach allows users to tailor their observability tools and workflows to their specific needs and preferences.