1) Let’s start with you!
Tell us a bit about yourself – your background, current role, and what excites you most in the world of tech.
I’m a Computer Science graduate who began her journey in tech in 2020 as a Cloud Support Associate at Amazon, specializing in databases. That role marked my first deep dive into the world of PostgreSQL—and it was love at first SELECT!
Over the years, I transitioned into roles with increasing scope and responsibility—first as a Cloud Support Engineer, and later as a Cloud Support Database Engineer (DBE). During this time, I earned recognition as an RDS Core and RDS PostgreSQL SME, which strengthened my confidence in solving real-world customer issues, diving deep into Postgres internals, and building scalable, cloud-native database solutions.
Today, I work as a Database Engineer in Amazon Fulfillment Technologies, where I manage thousands of Amazon’s production-grade databases running on Amazon RDS and Aurora PostgreSQL, along with several hundred instances of RDS and Aurora MySQL. It’s a high-scale, high-impact environment—and I genuinely love being in the thick of it all.
What excites me most about tech is its never-ending learning curve. Every incident, every architecture review, every deep dive is a fresh opportunity to grow. Whether it’s tuning a complex query plan or understanding the nuances of PostgreSQL internals, I’m constantly energized by the challenge and the ability to make systems more resilient and performant.
Being part of a community like Postgres Women India reminds me just how powerful mentorship and shared learning can be. It’s inspiring to connect with like-minded professionals and contribute to building a more inclusive, diverse tech community.
Outside of work, I love staying connected with open-source and cloud communities. I’m also a passionate Kuchipudi dancer, and whenever I get a chance, I travel to soak in the beauty of nature—it keeps me grounded and recharged.
2) Why PostgreSQL?
What inspired you to explore or switch to PostgreSQL?
PostgreSQL has been part of my story from day one. I began exploring databases with Postgres on my very first day at Amazon, and it quickly became my foundation for navigating complex data systems.
What started as baby steps—understanding the architecture, setting up my first instance, and learning how to query and monitor—soon evolved into a deep passion. I found myself replicating intricate customer issues, diving into logs, performance tuning, and exploring how Postgres behaves under real-world production workloads.
Guided by Amazon’s Leadership Principle of “Learn and Be Curious,” I pushed myself further and earned the title of RDS PostgreSQL Subject Matter Expert (SME). This role gave me the opportunity to handle critical escalations, connect directly with customers on specialized performance and feature requests, and mentor others on their own SME journeys. Each challenge pushed me deeper—from understanding VACUUM internals and index bloat to exploring logical replication, query planner behavior, and PostgreSQL extensions in depth.
The more I worked with PostgreSQL, the more I admired its thoughtful design, powerful extensibility, and the elegance with which it solves complex use cases. Supporting AWS customers through real-world production issues didn’t just sharpen my technical skills—it created a lasting emotional connection with the technology.
What truly sealed the deal was the PostgreSQL community—supportive, inclusive, and relentlessly innovative. PostgreSQL didn’t just shape my technical foundation; it shaped my identity as a database engineer.
It didn’t just spark my curiosity—it continues to fuel it every single day.
3) What are you working on with PostgreSQL right now?
Share the cool stuff you’re building, learning, or solving using PostgreSQL.
Lately, I’ve been exploring how PostgreSQL can be extended to support modern data applications, particularly those involving generative AI and intelligent query processing. I recently delivered a session at the Women in Data Tech Summit 2025 on “Generative AI-Driven Query Optimization for PostgreSQL”, focused on helping DBEs tackle the challenges of complex or ad-hoc queries, especially in dynamic, high-throughput workloads. You can check out the session summary in my LinkedIn post.
Alongside that, I’m continuing to upskill with PostgreSQL 17 and 18, exploring the latest enhancements, features, and debugging improvements. Staying updated with each major release helps me bring more modern, efficient solutions to the teams I support—and ensures we’re making the most of what PostgreSQL has to offer in production environments.
4) What’s been your biggest learning or challenge on this journey?
A lesson, mistake, or an aha moment, we’d love to hear about it!
One of my biggest “aha” moments came when I realized just how critical PostgreSQL parameter tuning is in production environments. This became evident during a high-priority customer escalation, where their workload was facing significant performance degradation under load.
The system showed no immediate red flags—no locks, no spikes, no errors—yet queries were consistently lagging. After a deeper investigation, we discovered that several PostgreSQL parameters needed to be fine-tuned based on the customer’s data volume and workload patterns. The default or previously set values simply weren’t aligned with how their application operated at scale.
It was a powerful reminder that PostgreSQL performance isn’t just about writing efficient queries—it’s also about tuning the engine to fit the workload. That experience pushed me to explore Postgres internals more deeply and strengthen my skills in real-world performance diagnostics—tools and insights I now rely on every day.
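As a small illustration of what that first diagnostic step can look like (a sketch, not a prescription — the specific values are hypothetical), listing the settings that have been changed from their defaults is often where a tuning investigation begins:

```sql
-- List parameters that differ from their built-in defaults
SELECT name, setting, unit, source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;

-- Example of workload-driven tuning: raising work_mem for a session
-- that runs large sorts. The 64MB figure is an illustrative starting
-- point, not a recommendation for any particular workload.
SET work_mem = '64MB';
```

The point of the exercise is exactly what the escalation taught: the “right” values depend on data volume and access patterns, so any change should be validated against the actual workload.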
5) Your wisdom to rookies like yourself?
What’s one tip or piece of advice you’d give to someone just starting out with PostgreSQL?
PostgreSQL can seem overwhelming at first, but every small thing you learn—be it running your first query, understanding EXPLAIN, or tweaking a config—adds to your foundation. Don’t focus on knowing it all at once. Instead, focus on solving real problems, even if it means just debugging a slow query or helping automate a task. Those hands-on experiences are where the real learning happens.
One tip that helped me early on: make friends with the documentation. The official PostgreSQL docs are incredibly well-written, and interdb.jp offers some of the best visual breakdowns of Postgres internals I’ve come across—it’s a goldmine for anyone curious about how things actually work under the hood.
And finally—ask questions, stay curious, and don’t underestimate the power of consistency. PostgreSQL has a vibrant and supportive community. Your journey doesn’t have to be perfect—it just has to be yours.
6) Finally, describe your PostgreSQL journey in one word.
Yep, just one!
Empowering!
7) Who or what has influenced your PostgreSQL learning the most?
A mentor, a community, a course, a project, tell us what or who helped you grow.
My PostgreSQL journey has been deeply shaped by the people and challenges around me.
It began with the incredible mentors I had at AWS, who not only guided me through the basics but also shared their real-world experiences that sparked my curiosity. Their enthusiasm was contagious—they helped me see PostgreSQL not just as a database, but as a powerful ecosystem worth exploring deeply.
As I grew into my role, it was the breadth and depth of challenges from AWS’s diverse customer base that pushed me further. Each complex issue—from performance bottlenecks to advanced tuning, replication quirks, and edge-case behaviors—became an opportunity to dive deeper and expand my understanding of how PostgreSQL works in high-scale, mission-critical environments.
Together, these experiences taught me that learning PostgreSQL isn’t just about reading documentation—it’s about collaborating, problem-solving, and continuously pushing your own boundaries.
8) What’s one PostgreSQL concept or feature you finally understood and felt proud of?
That lightbulb moment when something clicked, we all have one!
For me, it’s logical replication.
When I first encountered it, I found the concept tricky—especially understanding the interplay between publications, subscriptions, and how changes flow across nodes. Despite reading the documentation, it didn’t fully click right away.
But I stuck with it—diving into multiple articles, experimenting hands-on, and troubleshooting different replication scenarios in test environments. Over time, the pieces started falling into place, from replication slots and logical decoding to monitoring replication lag and realizing that DDL changes aren’t replicated and need to be applied manually on the subscriber.
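The core publish/subscribe flow that eventually clicked for me can be sketched in a few statements. All names and the connection string here are placeholders, and the publisher must be running with `wal_level = logical`:

```sql
-- On the publisher:
CREATE PUBLICATION my_pub FOR TABLE orders;

-- On the subscriber (connection details are placeholders):
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher.example dbname=appdb user=repl'
    PUBLICATION my_pub;

-- Back on the publisher, checking the replication slot the
-- subscription created is one simple way to watch progress:
SELECT slot_name, confirmed_flush_lsn
FROM pg_replication_slots;
```

Seeing changes flow from publication to subscription in a test environment made the documentation’s abstractions concrete.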
While I’m still learning and exploring the deeper aspects, reaching a point where I could confidently explain the fundamentals and start applying it to real-world use cases was something I felt genuinely proud of.
That’s the beauty of PostgreSQL—every complex concept has a “click” moment if you stay curious and keep at it.
Our idea explores the implementation of AI-driven query optimization in PostgreSQL, addressing the limitations of traditional optimization methods in handling modern database complexities. We present an innovative approach using reinforcement learning for automated index selection and query plan optimization. Our system leverages PostgreSQL’s pg_stat_statements for collecting query metrics and employs HypoPG for index simulation, while a neural network model learns optimal indexing strategies from historical query patterns. Through comprehensive testing on various workload scenarios, we validate the model’s ability to adapt to dynamic query patterns and complex analytical workloads. The research also examines the scalability challenges and practical considerations of implementing AI optimization in production environments.
Our findings establish a foundation for future developments in self-tuning databases while offering immediate practical benefits for PostgreSQL deployments. This work contributes to the broader evolution of database management systems, highlighting the potential of AI in creating more efficient and adaptive query optimization solutions.
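The index-simulation building block mentioned above can be illustrated with HypoPG directly. This sketch assumes the extension is installed and uses a hypothetical table; note that hypothetical indexes influence plain EXPLAIN only, not EXPLAIN ANALYZE, since they are never actually built:

```sql
CREATE EXTENSION IF NOT EXISTS hypopg;

-- Simulate an index without paying the cost of building it:
SELECT * FROM hypopg_create_index(
    'CREATE INDEX ON orders (customer_id)'
);

-- The planner now considers the hypothetical index:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Discard all hypothetical indexes when done:
SELECT hypopg_reset();
```

This is the kind of cheap what-if probe an automated index-selection loop can run many times per candidate before committing to a real index.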
This talk provides an introductory overview of Artificial Intelligence (AI) and Machine Learning (ML), exploring key concepts and their application in building intelligent systems. It will highlight the essential AI/ML techniques, such as supervised and unsupervised learning, and discuss practical use cases in modern industries. The session also focuses on how PostgreSQL, with its powerful extensions like PostgresML, TimescaleDB, and PostGIS, supports the development of AI-powered applications. By leveraging PostgreSQL’s ability to handle complex datasets and integrate machine learning models, participants will learn how to build scalable, intelligent solutions directly within the database environment.
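As a taste of the in-database ML workflow the talk covers, a PostgresML session can look roughly like the following. The project, table, and feature values are hypothetical, and the exact arguments should be checked against the PostgresML documentation for your installed version:

```sql
-- Train a regression model on an existing table (names are placeholders):
SELECT pgml.train(
    project_name  => 'house_prices',
    task          => 'regression',
    relation_name => 'housing_data',
    y_column_name => 'price'
);

-- Run inference directly in SQL against the trained project:
SELECT pgml.predict('house_prices', ARRAY[3.0, 1200.0, 1995.0]);
```

Keeping training and inference next to the data is the core appeal: no export pipeline, and predictions compose with ordinary SQL.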
Success is a product of Action, External Factors, and Destiny.
Out of these three, the only aspect we control is our action. Action, in turn, is the result of our EQ, IQ, SQ, and WQ (Willingness Quotient) combined.
We all want to be successful and keep trying to motivate ourselves with external factors. We read inspirational books, listen to great personalities, and, whenever possible, upgrade ourselves with more knowledge; the list goes on.
These are indeed excellent motivators, but in the process we forget the most important source of energy: YOU!
We read others’ stories to feel inspired, all while thinking, “I am not enough!”
But the day we start accepting ourselves, introspecting, and aligning our life’s purpose with our daily routine, we find our internal POWER. It is a continuous source of motivation and energy, exactly what we need in our low moments. When we feel lonely or stuck and seek help, our inner voice is our greatest companion.
But, how many times do we consciously think about our “Subconscious”?
“Journey to Self” is our structured coaching program, where we shift focus away from the outside world and delve deep inside to find our inner strength, with an emphasis on self-acceptance and personal growth.
I believe everyone has POWER within them!
Let’s be the POWERHOUSE!
Human, AI, and Personalized User Experience for DB Observability: A Composable Approach
Database users across various technical levels are frequently frustrated by the time-consuming and inefficient process of identifying the root causes of issues. This process often involves navigating multiple systems or dashboards, leading to delays in finding solutions and potential downstream impacts on operations.
The challenge is compounded by the varying levels of expertise among users. It is essential to strike the right balance between specialized and generalized experiences. Oversimplification can result in the loss of critical information, while an overwhelming amount of data can alienate certain users.
Developers and designers are constantly navigating these trade-offs to deliver optimal user experiences. The integration of AI introduces an additional layer of complexity. While AI can provide personalized experiences within databases, it is crucial to maintain user trust and transparency in the process.
The concept of personalized composable observability offers a potential solution. By combining the strengths of human expertise, information balance, and AI-driven personalization, we can create intuitive and user-friendly experiences. This approach allows users to tailor their observability tools and workflows to their specific needs and preferences.