Introduction:
Hello, my name is Gülçin Yıldırım Jelínek. I work at Xata, a modern PostgreSQL platform focused on enabling realistic staging environments with features such as copy-on-write instant branching, data anonymization and zero-downtime schema migrations.
Originally from Turkey, I have been living in Prague for more than eight years. I hold a degree in Applied Mathematics from Yildiz Technical University (Istanbul, Turkey) and a master’s degree in Computer and Systems Engineering from Tallinn Technical University (Tallinn, Estonia). I have been working professionally with databases for about 15 years with a particular focus on PostgreSQL since 2012.
I am a co-founder of Postgres Women, a former member of the PostgreSQL Europe Board of Directors and the main organizer of the Prague PostgreSQL Meetup, which I have been running since 2017. I am also a co-founder and general coordinator of Kadin Yazilimci (Women Developers), a volunteer-based organization dedicated to closing the gender gap in IT in Turkey. As part of this initiative, we organize an annual conference called Diva: Dive into AI, with the fourth edition planned for 2026 in Istanbul.
I was recently recognized as a PostgreSQL Contributor and am committed to supporting the growth of the Postgres project while contributing to its longevity and success.
Journey in PostgreSQL:
I began my career working with databases such as Oracle and DB2, first in the IT department of the largest private hospital in Turkey and later at the country’s largest commercial bank. In 2012, I received an offer from a startup using PostgreSQL to become their DBA, which marked my transition to Postgres, a decision I have never looked back on since.
Can you share a pivotal moment or project in your PostgreSQL career that has been particularly meaningful to you?
I started working at 2ndQuadrant in 2017, a company founded by the late Simon Riggs. There, I had the opportunity to manage customer databases at scale, develop automation for PostgreSQL administration, and lead the effort to build a Database-as-a-Service platform. At the time, this was ahead of its era; only a handful of companies were offering Postgres as a service. As part of this platform, we delivered Postgres Distributed (formerly known as BDR), supporting not only physical replicas but also multi-master, always-on architectures with integrated backup and restore, elastic scaling, monitoring and optimized PostgreSQL configuration.
I also feel incredibly fortunate to have worked alongside major PostgreSQL contributors at 2ndQuadrant and later at EDB. It was a unique opportunity to learn directly from the people who helped build core PostgreSQL features such as hot standby, streaming replication, logical decoding and extensions like pglogical as well as critical ecosystem projects including PgBouncer.
Contributions and Achievements:
When I first began speaking at PostgreSQL conferences, I was often the only woman present. On rare occasions, there might be one other woman on the schedule, a few in the audience or involved in the organization. I have consistently advocated for inclusivity and I am proud to say that the situation has improved significantly since then. Today, we see more women participating in the Postgres community across Europe and I hope it feels less intimidating for newcomers. I like to think I have had a small impact on this change and I continue to advocate for a stronger, more inclusive community. I consider myself a natural community builder. I help organize conferences, run meetups and speak regularly about Postgres and I am proud of these efforts.
(II) Have you faced any challenges in your work with PostgreSQL, and how did you overcome them?
In the beginning, there were some obstacles, mainly due to language barriers. When working in Turkey and contributing to the Postgres mailing lists, we often spent a great deal of time perfecting our English, trying to write messages that were as open, descriptive and to the point as possible. The lack of PostgreSQL resources in Turkish was also a significant challenge. Today, things are much easier: with the help of LLM-based tools, writing clear emails and translating complex terms into one’s native language has become far more accessible.
Community Involvement:
I engage with the PostgreSQL community in multiple ways: by submitting talk proposals, speaking at and attending conferences and helping to organize conferences and meetups. I also contribute through blog posts and ongoing advocacy for Postgres. In addition, I work on open-source tools such as pgroll and pgstream. Being part of the ecosystem, whether through direct feedback after talks or through issues raised by users of these tools, keeps me closely connected to the community and aware of its needs.
(II) Can you share your experience with mentoring or supporting other women in the PostgreSQL ecosystem?
I am always open to mentoring, and running the Kadin Yazilimci organization provides me with a platform to support other women in tech. My main focus has always been on young students, and over the years some of them have even become my colleagues. Seeing this growth is incredibly rewarding and continues to motivate my volunteer work.
Within Postgres Women, we have an unofficial Telegram group where we share news and opportunities with one another. I also make use of funds provided by the companies I have worked with to host gatherings such as breakfasts and dinners for women in Postgres, helping to foster in-person connections whenever possible. I am grateful to the sponsors who make these meetings possible.
Insights and Advice:
You are not late, you belong here, and you deserve your place in Postgres just like everyone else.
(II) Are there any resources (books, courses, forums) you’d recommend to someone looking to deepen their PostgreSQL knowledge?
I recommend following Planet PostgreSQL, a blog aggregator featuring posts from community members, which is my go-to source for staying up to date. I am also subscribed to the Postgres Weekly newsletter and track PostgreSQL (and Postgres) mentions on Hacker News and Reddit. While there are many excellent books on Postgres, I would like to highlight a recent one, Decode PostgreSQL: Understanding the World’s Most Powerful Open-Source Database Without Writing Code by Ellyne Phneah.
Looking Forward:
I am interested in how PostgreSQL can support different workloads. For example, how we can make Postgres the default database for AI and agents by improving vector support, which is currently not native to core Postgres. I’m also curious about how Postgres can go beyond its traditional OLTP role and serve analytics use cases effectively. Another area that excites me is running Postgres on Kubernetes. At Xata, we are using the CNPG operator in our new platform and I often think about ways to make Postgres run even more smoothly in Kubernetes environments. Finally, I believe extension management is still more complex than it should be and there is room for improvements that would make the experience more seamless and comfortable for users.
(II) Do you have any upcoming projects or goals within the PostgreSQL community that you can share?
I am co-organizing the PostgreSQL on Kubernetes Summit and the PostgreSQL & AI Summit at PGConf.EU, both taking place on the first day of the conference, October 21. I am excited to bring together people interested in focused discussions on these important developments in Postgres, to exchange ideas, learn from one another and ultimately share those insights with the wider PostgreSQL project.
Personal Reflection:
It may sound like a cliché, but over the years the Postgres community has come to feel like a second family. I have been fortunate to make great friends, watch our children grow up and see conferences double or even triple in size as PostgreSQL’s popularity has soared. I hope to always remain a part of this community.
(II) How do you balance your professional and personal life, especially in a field that is constantly evolving?
I try to keep my main focus on the projects I work with while also staying informed about developments in the field. I follow discussions that interest me on the mailing lists and, given the nature of my work, I also keep track of the features offered by other Postgres providers. We are very much a “Postgres family,” so Postgres often finds its way into personal conversations with my husband, who is also a major Postgres contributor. Our kid was able to pronounce “Postgres” correctly before she turned three 🙂
Message to the Community:
There are many factors that contribute to a community’s health and longevity. For Postgres, we need diverse voices and backgrounds that bring diverse ideas to the project. PostgreSQL is a proven technology with countless opportunities, and expertise in Postgres will never go out of fashion. The best time to start working with Postgres is today!
In PostgreSQL, table bloat can negatively impact performance by increasing storage requirements and slowing down queries. pg_squeeze is a powerful tool designed to combat this issue by automatically reorganizing tables to reclaim wasted space without requiring downtime. This talk will explore the mechanics of table bloat in PostgreSQL, introduce the capabilities of pg_squeeze, and demonstrate how it helps maintain optimal database performance by performing non-blocking vacuum operations and table maintenance. Attendees will gain insights into how to integrate and configure pg_squeeze in their environments and learn about its advantages over traditional methods like VACUUM FULL. Whether you’re managing a busy production database or looking to improve PostgreSQL performance, this session will provide practical strategies to tackle table bloat effectively.
Features of Postgres 17
Our idea explores the implementation of AI-driven query optimization in PostgreSQL, addressing the limitations of traditional optimization methods in handling modern database complexities. We present an innovative approach using reinforcement learning for automated index selection and query plan optimization. Our system leverages PostgreSQL’s pg_stat_statements for collecting query metrics and employs HypoPG for index simulation, while a neural network model learns optimal indexing strategies from historical query patterns. Through comprehensive testing on various workload scenarios, we will validate the model’s ability to adapt to dynamic query patterns and complex analytical workloads. The research also examines the scalability challenges and practical considerations of implementing AI optimization in production environments.
Our findings establish a foundation for future developments in self-tuning databases while offering immediate practical benefits for PostgreSQL deployments. This work contributes to the broader evolution of database management systems, highlighting the potential of AI in creating more efficient and adaptive query optimization solutions.
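As a hedged illustration of the core idea behind learned index selection (a toy sketch, not the system described above), a simple epsilon-greedy agent can learn which candidate index yields the best observed speedup. The index names and speedup values below are entirely hypothetical; in the described approach, rewards would come from HypoPG simulations and pg_stat_statements metrics rather than a fixed table.

```python
import random

random.seed(42)

# Hypothetical candidate actions and their true average query speedups,
# unknown to the agent; stand-ins for HypoPG-simulated index benefits.
true_speedup = {"idx_users_email": 0.9, "idx_orders_date": 0.4, "no_index": 0.1}

estimates = {k: 0.0 for k in true_speedup}
counts = {k: 0 for k in true_speedup}

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best estimate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for _ in range(2000):
    arm = choose()
    # Noisy reward simulating an observed speedup for the chosen index.
    reward = true_speedup[arm] + random.gauss(0, 0.05)
    counts[arm] += 1
    # Incremental running-average update of the value estimate.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = max(estimates, key=estimates.get)
print(best)
```

A production system would replace the bandit with a richer reinforcement-learning formulation over query plans, but the explore-versus-exploit loop above is the essential mechanism.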
In this talk, we will explore the emerging capabilities of vector search and how PostgreSQL, with its pgvector extension, is revolutionizing data retrieval by supporting AI/ML-powered vector-based indexing and search. As machine learning models generate high-dimensional vector embeddings, the need for efficient similarity searches has become critical in applications such as recommendation systems, image recognition, and natural language processing.
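To make the underlying similarity measure concrete, here is a minimal sketch of cosine similarity, the metric pgvector exposes via its `<=>`-style distance operators (the three-dimensional "embeddings" below are toy values; real models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for model-generated embeddings.
query = [0.1, 0.9, 0.2]
doc_a = [0.2, 0.8, 0.1]   # points in nearly the same direction as the query
doc_b = [0.9, 0.1, 0.0]   # points in a very different direction

print(cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b))  # True
```

Extensions like pgvector compute this (and related metrics such as Euclidean and inner-product distance) natively in SQL, with approximate indexes so the comparison does not require scanning every row.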
This tech talk delves into the critical world of PostgreSQL query plans, providing attendees with the knowledge and tools to understand, analyze, and optimize their database queries. We’ll begin by defining query plans and emphasizing their crucial role in database performance. We’ll explore the inner workings of the PostgreSQL planner, examining how it leverages various optimization techniques like sequential scans, index scans, join algorithms (hash join, merge join, nested loop), and more to craft the most efficient execution strategy for a given query.
The core of the talk focuses on practical analysis. Attendees will learn how to visualize and interpret query plans using EXPLAIN and ANALYZE commands, gaining insights into execution time, data access methods, and potential bottlenecks. We’ll demonstrate how to identify common performance issues like missing indexes, inefficient joins, or suboptimal query structures by deciphering the information within a query plan.
Finally, we’ll connect the dots between PostgreSQL’s optimization techniques and the resulting query plans. By understanding how the planner weighs factors like data distribution, table statistics, and available resources, attendees will be empowered to write better queries and proactively optimize their database schema for maximum performance. This session is essential for developers and database administrators seeking to unlock the full potential of PostgreSQL and ensure their applications run smoothly and efficiently.
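As a small companion to the analysis workflow described above, `EXPLAIN (FORMAT JSON)` emits the plan as a machine-readable tree, which makes programmatic inspection easy. The snippet below is a sketch against a trimmed, hand-written sample plan (real output carries many more fields such as rows, widths, timings, and buffers); it walks the tree to find the costliest leaf node, a common first clue when hunting for a missing index:

```python
import json

# A trimmed, hand-written example of EXPLAIN (FORMAT JSON) output
# for a hash join; field names match PostgreSQL's actual JSON format.
plan_json = """
[{"Plan": {
    "Node Type": "Hash Join",
    "Total Cost": 290.0,
    "Plans": [
        {"Node Type": "Seq Scan", "Total Cost": 220.0},
        {"Node Type": "Hash", "Total Cost": 35.0,
         "Plans": [{"Node Type": "Index Scan", "Total Cost": 30.0}]}
    ]
}}]
"""

def walk(node):
    """Yield every plan node in the tree, depth-first."""
    yield node
    for child in node.get("Plans", []):
        yield from walk(child)

root = json.loads(plan_json)[0]["Plan"]
# Leaf nodes (no child "Plans") are where data is actually read.
costliest_leaf = max(
    (n for n in walk(root) if not n.get("Plans")),
    key=lambda n: n["Total Cost"],
)
print(costliest_leaf["Node Type"])  # Seq Scan
```

In practice you would feed this the JSON returned by running `EXPLAIN (ANALYZE, FORMAT JSON) <query>` against a live database; an expensive sequential scan surfacing this way often signals a missing or unused index.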
This talk provides an introductory overview of Artificial Intelligence (AI) and Machine Learning (ML), exploring key concepts and their application in building intelligent systems. It will highlight the essential AI/ML techniques, such as supervised and unsupervised learning, and discuss practical use cases in modern industries. The session also focuses on how PostgreSQL, with its powerful extensions like PostgresML, TimescaleDB, and PostGIS, supports the development of AI-powered applications. By leveraging PostgreSQL’s ability to handle complex datasets and integrate machine learning models, participants will learn how to build scalable, intelligent solutions directly within the database environment.
Success is a product of Action, External Factors, and Destiny.
Of these three, the only controllable aspect is our action. In turn, action is the result of our EQ, IQ, SQ, and WQ (Willingness Quotient) combined.
We all want to be successful and keep trying to motivate ourselves with external factors. We read inspirational books, listen to great personalities and, whenever possible, upgrade ourselves with more knowledge; the list goes on.
Indeed, these are excellent motivators, but in the process we forget the most important source of energy: YOU!
We read others’ stories to feel inspired, all while thinking, “I am not enough!”
But the day we start accepting ourselves, introspecting, understanding, and aligning our life purpose with our routine, we find our internal POWER. This is a continuous source of motivation and energy that we need in our down moments. When we feel lonely or stuck and seek help, our inner voice is our greatest companion.
But, how many times do we consciously think about our “Subconscious”?
“Journey to Self” is our structured coaching program where we take the focus back from the outside world and delve deep inside to find our inner strength, focusing on self-acceptance and personal growth.
I believe everyone has POWER within them!
Let’s be the POWERHOUSE!
Human, AI, and Personalized User Experience for DB Observability: A Composable Approach
Database users across various technical levels are frequently frustrated by the time-consuming and inefficient process of identifying the root causes of issues. This process often involves navigating multiple systems or dashboards, leading to delays in finding solutions and potential downstream impacts on operations.
The challenge is compounded by the varying levels of expertise among users. It is essential to strike the right balance between specialized and generalized experiences. Oversimplification can result in the loss of critical information, while an overwhelming amount of data can alienate certain users.
Developers and designers are constantly navigating these trade-offs to deliver optimal user experiences. The integration of AI introduces an additional layer of complexity. While AI can provide personalized experiences within databases, it is crucial to maintain user trust and transparency in the process.
The concept of personalized composable observability offers a potential solution. By combining the strengths of human expertise, information balance, and AI-driven personalization, we can create intuitive and user-friendly experiences. This approach allows users to tailor their observability tools and workflows to their specific needs and preferences.
This keynote will explore how L&D (Learning & Development) has transformed from the pre-AI to the post-AI era, and what this shift means for efficiency and job security.