Introduction:
Hi, I’m Dhara Parikh, a Senior Database Administrator and Gen AI Engineer exploring AI and data innovation — building intelligent, data-driven applications using modern frameworks.
With experience across platforms like SQL Server and DB2, I’m passionate about bridging traditional database management with AI-powered solutions. My PostgreSQL journey began with a pgvector use case, where I integrated vector similarity search with Azure OpenAI to enable smarter data discovery.
PostgreSQL inspires me for its blend of reliability and innovation — empowering professionals like me to turn data into intelligent solutions.
Journey in PostgreSQL:
My PostgreSQL journey began when I was encouraged to explore how natural language could be used to interact with databases.
This led me to pgvector and other emerging AI capabilities in PostgreSQL — including semantic search, sentiment analysis, text embeddings, and similarity matching.
It’s been exciting to see how PostgreSQL can combine data intelligence with AI innovation, making it a powerful platform for the next generation of applications.
Can you share a pivotal moment or project in your PostgreSQL career that has been particularly meaningful to you?
A key moment in my PostgreSQL journey was building a Q&A application using pgvector to enable natural language interaction with data. It improved accuracy, made insights more accessible, and reduced infrastructure costs by around 40%. This project showed me how PostgreSQL can power AI-driven and cost-efficient solutions.
Contributions and Achievements:
One of the contributions I’m most proud of is designing a Generative AI application powered by PostgreSQL.
The goal was to make data interaction more intuitive — allowing users to query information in natural language. Using pgvector as the foundation, I stored and searched text embeddings generated through Azure OpenAI, and integrated it with the LangChain framework to orchestrate context-aware retrieval and response generation.
The solution enabled semantic search, knowledge discovery, and intelligent Q&A over structured and unstructured data — all within PostgreSQL. What made it meaningful was seeing how a traditional database could evolve into a modern vector intelligence layer, bridging enterprise data with AI in a seamless, scalable way.
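To make the pattern concrete, here is a minimal pgvector sketch of the storage and retrieval side. The table name, the 1536-dimension embedding size (typical of Azure OpenAI text-embedding models), and the query are illustrative assumptions, not the project's actual schema:

    -- Enable pgvector and store each document's text next to its embedding
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE documents (
        id        bigserial PRIMARY KEY,
        content   text NOT NULL,
        embedding vector(1536)   -- dimension assumed to match the embedding model
    );

    -- Retrieve the five documents closest to a question's embedding,
    -- generated application-side (e.g. via Azure OpenAI) and passed in as $1;
    -- <=> is pgvector's cosine-distance operator
    PREPARE semantic_search (vector) AS
        SELECT content
        FROM documents
        ORDER BY embedding <=> $1
        LIMIT 5;

In a LangChain-style flow, the retrieved rows then become the context the language model uses to ground its answer.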
This project reaffirmed my belief that PostgreSQL is not just a database — it’s a powerful AI-ready platform capable of driving the next wave of intelligent data experiences.
(II) Have you faced any challenges in your work with PostgreSQL, and how did you overcome them?
Yes, I’ve faced a few challenges while working with PostgreSQL, but each one has helped me grow and understand the system more deeply.
At times, handling large datasets with multiple users led to slow-running queries and indexing issues. To resolve this, I focused on analyzing execution plans, tuning queries, and adjusting autovacuum settings to prevent table bloat. These small but consistent improvements helped make PostgreSQL run faster and more efficiently under heavy workloads.
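As an illustration, per-table autovacuum storage parameters like the ones below make vacuum trigger far earlier than the global defaults on hot tables; the table name and thresholds here are assumptions, not the actual values used:

    -- Vacuum when roughly 2% of rows are dead (the default is 20%),
    -- so dead tuples are reclaimed before bloat builds up
    ALTER TABLE orders SET (
        autovacuum_vacuum_scale_factor  = 0.02,
        autovacuum_vacuum_threshold     = 1000,
        autovacuum_analyze_scale_factor = 0.01
    );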
When I started using PostgreSQL for AI workloads like semantic search and embeddings, it was a new learning curve. Managing large vector data and similarity searches efficiently required experimentation. I worked with the pgvector extension, explored indexing techniques, and fine-tuned storage and query settings to improve search performance and accuracy.
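A sketch of that kind of experimentation with pgvector's approximate-nearest-neighbour indexes (the table name and parameter values are assumptions; the right settings depend on the dataset):

    -- HNSW index for cosine similarity; m and ef_construction trade
    -- build time and index size against recall
    CREATE INDEX ON documents
        USING hnsw (embedding vector_cosine_ops)
        WITH (m = 16, ef_construction = 64);

    -- At query time, raising ef_search improves recall at some latency cost
    SET hnsw.ef_search = 100;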
Each challenge reinforced my belief that PostgreSQL is not just a database — it’s a flexible and evolving platform that can handle both enterprise workloads and modern AI-driven applications with the right tuning and understanding.
Community Involvement:
I engage with the PostgreSQL community by sharing my learning experiences, PoC outcomes, and use cases that combine AI and PostgreSQL, such as semantic search and vector similarity. I also follow community discussions, blogs, and updates to stay informed about new releases and best practices.
Recently, I’ve started presenting my work and insights to encourage others to explore PostgreSQL’s capabilities in AI-driven and modern data workloads.
(II) Can you share your experience with mentoring or supporting other women in the PostgreSQL ecosystem?
I’ve been actively supporting and mentoring other women in the PostgreSQL community through webinars, engineering exchanges, and knowledge-sharing sessions.
I also enjoy writing Medium articles and presenting my PostgreSQL use cases, especially where it connects with AI and ML applications.
These interactions are a great way to share learning, encourage experimentation, and help more women gain confidence in exploring modern innovations with PostgreSQL.
Insights and Advice:
My advice to women starting their careers in technology, especially in database management and PostgreSQL, is to embrace the unknown — that’s where most growth happens. Every challenge you take on will strengthen your skills and confidence.
Keep learning, experiment with emerging areas like AI/ML-driven data management, and share your journey with others.
Most importantly, believe in yourself — self-belief is what turns obstacles into opportunities and curiosity into expertise.
(II) Are there any resources (books, courses, forums) you’d recommend to someone looking to deepen their PostgreSQL knowledge?
I’ve found “PostgreSQL: Up and Running” by Regina Obe & Leo Hsu (O’Reilly) and the “PostgreSQL 13 Cookbook” (Packt) extremely helpful in building both foundational and advanced DBA skills. I also stay connected through the PostgreSQL community forums, which are great for learning, sharing experiences, and exploring new ideas.
Looking Forward:
Future AI developments in PostgreSQL are exciting. PostgreSQL 18 integrates asynchronous I/O, which brings significant performance improvements well suited to AI workloads, along with enhanced indexing and query optimization features for complex AI-driven data processing. Looking ahead, PostgreSQL is evolving to handle AI applications even better by combining efficient vector search, cloud-native storage integration, and advanced hybrid query techniques. Together, these enhancements make PostgreSQL a powerful, AI-ready database platform for scalable and intelligent applications.
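For instance, the new I/O subsystem in PostgreSQL 18 is selected through the io_method setting; the sketch below is illustrative (both parameters take effect only after a server restart, and io_uring is only available on supported Linux builds):

    -- Choose the asynchronous I/O implementation (PostgreSQL 18+)
    ALTER SYSTEM SET io_method = 'worker';   -- or 'io_uring' where supported
    -- Size the pool of background I/O worker processes
    ALTER SYSTEM SET io_workers = 4;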
(II) Do you have any upcoming projects or goals within the PostgreSQL community that you can share?
Yes, we’re currently working on an exciting project — building a database migration tool to seamlessly move Oracle workloads to PostgreSQL on Amazon RDS.
What makes it unique is the use of AI-driven automation, leveraging frameworks like LangGraph and agentic workflows to simplify schema conversion, query translation, and performance optimization.
The goal is to make enterprise migrations smarter, faster, and more resilient using PostgreSQL as the target platform.
Personal Reflection:
Being part of the PostgreSQL community means belonging to a network that fosters learning, collaboration, and innovation. I truly appreciate how the community constantly supports growth, visibly advances the technology, and values people’s contributions. Working on AI-driven Oracle-to-PostgreSQL migration tools has shown me how this collective effort turns complex ideas into practical, real-world solutions.
(II) How do you balance your professional and personal life, especially in a field that is constantly evolving?
I plan my day intentionally, guided by the principles of Atomic Habits by James Clear — focusing on small, consistent actions that drive meaningful progress. I make it a point to learn something new every day, whether it’s a PostgreSQL feature, an AI concept, or a new life skill. I dedicate time to reading, mindfulness, and fitness to stay focused and energized, and spending moments with my kid keeps me grounded and inspired.
Message to the Community:
If I could share one thought with the PostgreSQL community, especially with women — it would be this: don’t be scared to dive into the unknown. Every step outside your comfort zone opens up a new world of learning and confidence. PostgreSQL, like any technology, rewards curiosity and persistence. Keep exploring, keep experimenting, and remember — your ideas and contributions can inspire many others to take that first step too.
Talk Abstracts:

In PostgreSQL, table bloat can negatively impact performance by increasing storage requirements and slowing down queries. pg_squeeze is a powerful tool designed to combat this issue by automatically reorganizing tables to reclaim wasted space without requiring downtime. This talk will explore the mechanics of table bloat in PostgreSQL, introduce the capabilities of pg_squeeze, and demonstrate how it maintains optimal database performance through non-blocking table reorganization and maintenance. Attendees will learn how to integrate and configure pg_squeeze in their environments and what advantages it offers over traditional methods like VACUUM FULL. Whether you’re managing a busy production database or looking to improve PostgreSQL performance, this session will provide practical strategies to tackle table bloat effectively.
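As a rough sketch of how pg_squeeze is typically wired up (exact catalog columns and function signatures vary between pg_squeeze versions, so treat this as illustrative rather than definitive):

    -- postgresql.conf must load the extension first (restart required):
    --   shared_preload_libraries = 'pg_squeeze'
    CREATE EXTENSION pg_squeeze;

    -- Register a table for scheduled, non-blocking reorganization
    INSERT INTO squeeze.tables (tabschema, tabname, schedule)
    VALUES ('public', 'orders', ('{30}', '{22}', NULL, NULL, '{3,5}'));

    -- Or reclaim space from a bloated table once, on demand
    SELECT squeeze.squeeze_table('public', 'orders');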
Features of PostgreSQL 17
Our idea explores the implementation of AI-driven query optimization in PostgreSQL, addressing the limitations of traditional optimization methods in handling modern database complexities. We present an innovative approach using reinforcement learning for automated index selection and query plan optimization. Our system leverages PostgreSQL’s pg_stat_statements for collecting query metrics and employs HypoPG for index simulation, while a neural network model learns optimal indexing strategies from historical query patterns. Through comprehensive testing on various workload scenarios, we will validate the model’s ability to adapt to dynamic query patterns and complex analytical workloads. The research also examines the scalability challenges and practical considerations of implementing AI optimization in production environments.
Our findings establish a foundation for future developments in self-tuning databases while offering immediate practical benefits for PostgreSQL deployments. This work contributes to the broader evolution of database management systems, highlighting the potential of AI in creating more efficient and adaptive query optimization solutions.
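As a hedged illustration of the metric-collection and index-simulation building blocks described above (table and column names are assumptions):

    -- Surface the most expensive query shapes as training input
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    SELECT query, calls, mean_exec_time, total_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;

    -- Simulate a candidate index with HypoPG without building it;
    -- hypothetical indexes are visible to plain EXPLAIN (not EXPLAIN ANALYZE)
    CREATE EXTENSION IF NOT EXISTS hypopg;
    SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;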
In this talk, we will explore the emerging capabilities of vector search and how PostgreSQL, with its pgvector extension, is revolutionizing data retrieval by supporting AI/ML-powered vector-based indexing and search. As machine learning models generate high-dimensional vector embeddings, the need for efficient similarity searches has become critical in applications such as recommendation systems, image recognition, and natural language processing.
This tech talk delves into the critical world of PostgreSQL query plans, providing attendees with the knowledge and tools to understand, analyze, and optimize their database queries. We’ll begin by defining query plans and emphasizing their crucial role in database performance. We’ll explore the inner workings of the PostgreSQL planner, examining how it leverages various optimization techniques like sequential scans, index scans, join algorithms (hash join, merge join, nested loop), and more to craft the most efficient execution strategy for a given query.
The core of the talk focuses on practical analysis. Attendees will learn how to visualize and interpret query plans using EXPLAIN and ANALYZE commands, gaining insights into execution time, data access methods, and potential bottlenecks. We’ll demonstrate how to identify common performance issues like missing indexes, inefficient joins, or suboptimal query structures by deciphering the information within a query plan.
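For example (table and column names assumed for illustration):

    -- ANALYZE executes the query and reports actual row counts and timings;
    -- BUFFERS adds shared-buffer hit/read statistics
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c.name, SUM(o.total)
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name;

    -- Large gaps between estimated and actual rows often point to stale
    -- statistics; refresh them and re-read the plan
    ANALYZE customers, orders;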
Finally, we’ll connect the dots between PostgreSQL’s optimization techniques and the resulting query plans. By understanding how the planner weighs factors like data distribution, table statistics, and available resources, attendees will be empowered to write better queries and proactively optimize their database schema for maximum performance. This session is essential for developers and database administrators seeking to unlock the full potential of PostgreSQL and ensure their applications run smoothly and efficiently.
This talk provides an introductory overview of Artificial Intelligence (AI) and Machine Learning (ML), exploring key concepts and their application in building intelligent systems. It will highlight the essential AI/ML techniques, such as supervised and unsupervised learning, and discuss practical use cases in modern industries. The session also focuses on how PostgreSQL, with its powerful extensions like PostgresML, TimescaleDB, and PostGIS, supports the development of AI-powered applications. By leveraging PostgreSQL’s ability to handle complex datasets and integrate machine learning models, participants will learn how to build scalable, intelligent solutions directly within the database environment.
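As a hedged sketch of the in-database ML pattern that PostgresML enables (the project, table, and column names are assumptions, and exact pgml signatures depend on the PostgresML version):

    -- Train a classifier on a relation directly inside PostgreSQL
    SELECT * FROM pgml.train(
        'churn_predictor',    -- project name
        'classification',     -- task
        'customer_features',  -- training relation
        'churned'             -- label column
    );

    -- Score rows with the deployed model
    SELECT customer_id,
           pgml.predict('churn_predictor',
                        ARRAY[tenure, monthly_spend, support_tickets])
    FROM customer_features;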
Success is the product of Action, External Factors, and Destiny.
Of these three, the only one we can control is our action. Action, in turn, is the result of our EQ, IQ, SQ, and WQ (Willingness Quotient) working together.
We all want to be successful and keep trying to motivate ourselves with external factors: we read inspirational books, listen to great personalities, and upgrade ourselves with more knowledge whenever possible; the list goes on.
Indeed, these are excellent motivators, but in the process we forget the most important source of energy: YOU!
We read others’ stories to feel inspired, thinking, “I am not enough!”
But the day we start accepting ourselves, introspecting, understanding, and aligning our life purpose with our daily routine, we find our internal POWER. This is a continuous source of motivation and energy that carries us through low moments. When we feel lonely or stuck and seek help, our inner voice is our greatest companion.
But how many times do we consciously think about our “Subconscious”?
“Journey to Self” is our structured coaching program, where we shift the focus back from the outside world and delve deep inside to find our inner strength, focusing on self-acceptance and personal growth.
I believe everyone has POWER within them!
Let’s be the POWERHOUSE!
Human, AI, and Personalized User Experience for DB Observability: A Composable Approach
Database users across various technical levels are frequently frustrated by the time-consuming and inefficient process of identifying the root causes of issues. This process often involves navigating multiple systems or dashboards, leading to delays in finding solutions and potential downstream impacts on operations.
The challenge is compounded by the varying levels of expertise among users. It is essential to strike the right balance between specialized and generalized experiences. Oversimplification can result in the loss of critical information, while an overwhelming amount of data can alienate certain users.
Developers and designers are constantly navigating these trade-offs to deliver optimal user experiences. The integration of AI introduces an additional layer of complexity. While AI can provide personalized experiences within databases, it is crucial to maintain user trust and transparency in the process.
The concept of personalized composable observability offers a potential solution. By combining the strengths of human expertise, information balance, and AI-driven personalization, we can create intuitive and user-friendly experiences. This approach allows users to tailor their observability tools and workflows to their specific needs and preferences.
This keynote will explore how Learning & Development (L&D) has transformed from the pre-AI to the post-AI era, and what that shift means for its efficiency and for job security.