Alfredo Rodriguez / Wolfgres
This is a case study of multi-master replication using logical replication and the pgEdge/spock extension. Its main objective is to put this extension to the test and answer key questions about how multi-master replication behaves in a real-world situation, such as a highly transactional financial application.
We designed applications and stressed clusters with millions of transactions to evaluate the pgEdge/spock extension.
Álvaro Herrera / EDB
An important performance optimization was added to Postgres 17, which changed the way SLRUs are managed and configured. But what exactly are these SLRUs? What do they store? What exactly changed in Postgres 17? How do we configure them now? This talk goes over these questions in detail.
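The configuration change in question: PostgreSQL 17 replaced the hard-coded SLRU cache sizes with per-SLRU settings. A sketch of what the new knobs look like in postgresql.conf (parameter names as listed in the PostgreSQL 17 release notes; the values here are purely illustrative, not recommendations):

```sql
-- postgresql.conf: each SLRU cache is now individually sizable (PostgreSQL 17+).
-- Some of these default to scaling with shared_buffers.
commit_timestamp_buffers = '256kB'
multixact_member_buffers = '512kB'
multixact_offset_buffers = '128kB'
notify_buffers           = '128kB'
serializable_buffers     = '256kB'
subtransaction_buffers   = '512kB'
transaction_buffers      = '512kB'
```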
Ants Aasma / Cybertec
As much as we all love PostgreSQL's EXPLAIN command for diagnosing performance issues, there are some things it doesn't reveal to us. In this talk I will take a look at some of the hidden performance gremlins and how to spot them in your plans. Things like TOAST lookups, visibility checks, hint bits, and expression evaluation time can turn up in surprising places in your execution plans with little to no indication that this is happening. Knowing how to look for these problems is already a large step towards solving them.
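One way such hidden costs surface: detoasting time is simply folded into whichever node touches the wide column. A minimal illustration of the idea (table and column names are hypothetical):

```sql
-- The two plans can look nearly identical, yet the second query pays for
-- TOAST lookups: the cost shows up only as extra time and buffer hits on
-- the scan node, with no 'TOAST' marker anywhere in the output.
EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM docs;
EXPLAIN (ANALYZE, BUFFERS) SELECT id, length(body) FROM docs;  -- body: a large, TOASTed text column
```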
Bilge Ince / EDB
Bilge Ince will examine the smooth incorporation of AI into PostgreSQL. Learn how enhancements driven by AI, such as the aidb extension and pgvector, are transforming Postgres from a conventional database into an intelligent system. This presentation will offer practical tips for utilizing AI within your data architecture with Postgres. Additionally, you’ll enhance your understanding of the function of LLMs and their relationship with vector data in Postgres. As AI continues to unveil new potential with Postgres, this discussion will provide a glimpse into future opportunities for merging AI with the world’s leading database technology.
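As a taste of the vector side of this, a minimal pgvector sketch (the extension's documented syntax; the table and data are made up):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (
    id        bigserial PRIMARY KEY,
    embedding vector(3)          -- 3 dimensions only for illustration
);

INSERT INTO items (embedding) VALUES ('[1,1,1]'), ('[2,2,2]');

-- Nearest-neighbour search by Euclidean distance (the <-> operator).
SELECT id FROM items ORDER BY embedding <-> '[1,1,2]' LIMIT 1;
```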
Alexander Kukushkin / Microsoft
Ever wondered if synchronous replication in PostgreSQL is too slow or if it always prevents data loss? Let's uncover the truth. Synchronous replication in PostgreSQL has been available for almost 15 years and provides a foundation for high availability with zero RPO. However, even experienced DBAs and application developers often don’t understand all the quirks and trade-offs of using it.
This session will explore myths and truths surrounding synchronous replication. Using real-world examples and tests, we'll separate facts from fiction and give clear explanations on how synchronous replication works and how to use it effectively.
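For context, the trade-offs discussed mostly revolve around two settings. A sketch of a common configuration (the standby names are hypothetical):

```sql
-- Wait for confirmation from any one of two named standbys.
ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (standby_a, standby_b)';

-- Per-transaction durability level: off, local, remote_write, on, remote_apply.
-- 'on' waits for the standby to flush WAL; 'remote_apply' also waits until it
-- is applied (visible to standby queries), trading latency for read-your-writes.
SET synchronous_commit = remote_apply;
```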
David Pech
PostgreSQL clusters can encounter a multitude of complex states following a primary node switch—be it planned or due to unexpected incidents. In this in-depth session, we'll illustrate the various situations a cluster might find itself in after a failure. By manually mimicking the logic of tools like Patroni and PostgreSQL Operators, we'll step through failover and switchover processes to understand how these tools handle different failure scenarios. We'll showcase a spectrum of potential cluster states, from smooth planned switchovers to challenging split-brain situations, shedding light on the implications each has on your database environment. Through demonstrations, we'll explore recovery strategies using tools like pg_basebackup and pg_rewind, providing practical solutions to restore cluster operations.
Attendees will gain valuable insights into analyzing cluster status post-failure, understanding the nuances of each possible state, and implementing effective recovery plans. By the end of this session you’ll hopefully ditch your manual disaster recovery process and switch to a well-established tool that will do it for you significantly faster and better.
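As a flavour of the recovery tooling covered, a pg_rewind invocation might look like this (paths and connection string are hypothetical; the old primary must be cleanly shut down first):

```shell
# Resynchronize a former primary's data directory with the new primary,
# copying back only the blocks that diverged after the timeline split.
pg_rewind \
  --target-pgdata=/var/lib/postgresql/17/main \
  --source-server='host=new-primary port=5432 user=postgres dbname=postgres' \
  --progress
```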
Dirk Krautschick / Aiven
What started as a simple request in my daily business, to compare the performance of PostgreSQL DBaaS offerings across different cloud vendors, opened the door to an unexpected journey of far more effort, research, and thought. Achieving optimal performance from PostgreSQL requires a nuanced understanding of its behavior under different workloads. This talk chronicles an in-depth journey into PostgreSQL benchmarking, sharing lessons learned, unexpected challenges, and key insights gained along the way. We will explore the methodology used to design and execute meaningful benchmarks, including the selection of tools, configuration tuning, and workload modeling that aligns with real-world scenarios. Key takeaways will include strategies for interpreting benchmark results, common pitfalls to avoid, and the impact of factors such as parametrization and compute configurations.
The session also dives into the evolution of benchmarking practices, addressing questions like: What distinguishes synthetic benchmarks from real-world performance measurements? How do version upgrades and new features affect benchmarking strategies? Whether you are a database administrator, developer, or architect, this talk aims to equip you with practical insights and actionable techniques to master the art and science of PostgreSQL benchmarking, helping you unlock its full potential for your workloads.
Esther Minano / xata.io
Postgres is great at what it does (and that is a lot), but sometimes you need to extend its reach by using more specialised tools. Making the data available in near real time to those tools is something that Postgres already supports via replication. There are many tools to help you stream data out of Postgres built on replication, but most on the market don’t support evolving schemas. Database schemas have a tendency to change over time - if your change data capture tool doesn’t support replicating them, you risk data loss and manual intervention (along with late night pages and subsequent headaches) to fix your pipeline.
This talk presents a new approach to data replication, where schema changes are tracked in Postgres and automatically replicated alongside the data, forming a linearised schema change log that is then applied downstream. It covers the ideas behind this approach, as well as some sample use cases.
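The gap being addressed: native logical replication streams row changes from a publication, but it does not replicate DDL, so downstream consumers break when the schema evolves. For reference (standard PostgreSQL syntax; the names are hypothetical):

```sql
-- Row changes to 'orders' flow to subscribers of this publication...
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- ...but this DDL is NOT replicated: subscribers must apply it themselves,
-- or applying subsequent rows that carry the new column will fail downstream.
ALTER TABLE orders ADD COLUMN discount numeric;
```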
Gulcin Yildirim Jelinek / xata.io
In PostgreSQL, managing schema changes without downtime can be a challenging task. Table-level locks, which control access during Data Definition Language (DDL) operations like ALTER or DROP TABLE, can result in unintended application slowdowns or even service interruptions when not fully understood. This talk will provide a comprehensive dive into table-level locking and lock queueing in PostgreSQL, helping attendees gain the insights they need to perform efficient schema changes.
We’ll start by explaining the various types of table-level locks in PostgreSQL, such as Access Share, Exclusive, and Access Exclusive, and how they are automatically managed during common DDL operations. Then, we’ll break down lock queueing: how PostgreSQL organizes lock requests, what happens when transactions wait for locks, and how deadlocks can arise in complex environments. Next, we’ll focus on practical approaches to managing table-level locks for near-zero downtime. Attendees will learn techniques to minimize locking impact, including understanding lock conflicts, using online schema migration patterns, and identifying lock-heavy queries. We’ll introduce open-source tools like pgroll, which uses the expand/contract pattern to make schema changes in small, lock-free steps.
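A small illustration of why the lock queue matters: an ALTER TABLE needs an Access Exclusive lock, and every later query, even a plain SELECT, queues behind it. A common mitigation sketch (the table name is hypothetical):

```sql
-- Without a timeout, this ALTER queues behind any long-running transaction
-- holding Access Share on 'orders', and every new query queues behind the ALTER.
SET lock_timeout = '2s';                  -- give up quickly instead of stalling the queue
ALTER TABLE orders ADD COLUMN note text;  -- brief Access Exclusive lock
-- If the lock times out, retry later rather than blocking production traffic.
```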
By the end of this session, attendees will be equipped with practical strategies and knowledge to control lock behavior during schema changes, ensuring data integrity and reducing operational disruptions. This talk will provide the tools needed to manage PostgreSQL schema changes with confidence and minimal impact on production environments.
Josef Machytka / NetApp
This talk shares practical insights from building a data lakehouse architecture using PostgreSQL, BigQuery, and Google Cloud Storage. It explains the pros and cons of various data formats, such as Parquet and Avro, shows how to effectively combine relational and non-relational data in PostgreSQL, and discusses optimization techniques for scaling PostgreSQL to handle large datasets. It also touches on maintaining robust data governance, ensuring compliance with privacy and security standards, and implementing data quality checks. Do not allow your data lake to turn into a dark swamp full of digital monsters!
Mayuresh Bagayatkar
PostgreSQL’s autovacuum tuning in the cloud era can feel like walking a tightrope, balancing application performance, cloud costs, and database maintenance efficiency. This talk will unravel how to fine-tune autovacuum settings for peak efficiency across AWS, GCP, and Azure, despite throttled throughput and resource limits. We'll cover real-world tricks to boost autovacuum efficiency without busting your cloud budget, turning the challenge of autovacuum tuning into an art form of juggling cloud FinOps, speed, and scalability.
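Much of this tuning happens per table rather than globally. A sketch of the kind of storage-parameter overrides involved (the table name and values are illustrative, not recommendations):

```sql
-- Vacuum a hot table after ~1% dead rows instead of the 20% default,
-- and allow autovacuum to do more work per cost cycle.
ALTER TABLE events SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_limit   = 2000
);
```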
Sadeq Dousti / Trade Republic
This is a 100% live demo on PostgreSQL's query planner and the crucial role of statistics in shaping its decision-making process. We'll explore the necessity of column statistics for query planning, and the types of statistics gathered such as the most common values, their frequencies, and histograms. The role of analyzing vs. vacuuming tables is illustrated through examples. Additionally, we'll examine the potential pitfalls of stale statistics and how they can lead to suboptimal query plans, leading nicely into autovacuum tuning. Finally, we'll delve into the world of multi-column statistics, a powerful tool for optimizing complex queries involving conditions on multiple columns. By the end of this session, attendees will gain a comprehensive understanding of PostgreSQL statistics and be equipped with the knowledge to fine-tune their databases for optimal performance.
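Two of the commands at the heart of such a demo, for reference (table and column names are hypothetical):

```sql
-- Refresh single-column statistics (MCVs, frequencies, histograms, ndistinct).
ANALYZE orders;

-- Extended statistics: tell the planner that city and zip_code are correlated,
-- so conditions on both are not estimated as if they were independent.
CREATE STATISTICS orders_city_zip (dependencies, ndistinct)
    ON city, zip_code FROM orders;
ANALYZE orders;

-- Inspect what was gathered.
SELECT attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'orders';
```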
Robert Haas / EDB
In this talk, I'll discuss the incremental backup feature that I developed for PostgreSQL 17. I'll cover how we determine what data has changed and why the chosen approach was selected, then review in some detail how incremental backups can be taken and restored, and why things work as they do. I'll briefly touch on use cases for the feature and possible future work in this area.
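In outline, the PostgreSQL 17 workflow pairs pg_basebackup's new --incremental option with pg_combinebackup (the paths here are hypothetical):

```shell
# Full backup; its backup_manifest serves as the incremental baseline.
# (The server must have summarize_wal = on for incrementals to work.)
pg_basebackup -D /backups/full

# Incremental backup: only blocks changed since the referenced manifest.
pg_basebackup -D /backups/incr1 --incremental=/backups/full/backup_manifest

# Reconstruct a restorable data directory from the backup chain.
pg_combinebackup /backups/full /backups/incr1 -o /restore/pgdata
```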
Ryan Booz / Redgate
Security is an essential, yet often misunderstood, part of effectively managing a PostgreSQL cluster. As the popularity and adoption of PostgreSQL continues to grow, the interaction of roles, privileges, and object ownership is a recurring theme of confusion in forums and mailing lists.
In this session we’ll demystify how ownership and permissions work in PostgreSQL so that you can more effectively manage your database landscape.
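As a flavour of the ownership-and-privileges interplay covered, a common pattern looks like this (role and table names are hypothetical):

```sql
-- Ownership confers implicit full rights, including ALTER and DROP.
CREATE ROLE app_owner NOLOGIN;
CREATE ROLE app_rw    LOGIN;
ALTER TABLE invoices OWNER TO app_owner;

-- Non-owners need explicit privileges.
GRANT SELECT, INSERT, UPDATE ON invoices TO app_rw;

-- Common gotcha: default privileges apply only to objects app_owner creates
-- later, never retroactively to existing tables.
ALTER DEFAULT PRIVILEGES FOR ROLE app_owner IN SCHEMA public
    GRANT SELECT ON TABLES TO app_rw;
```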
Somdyuti Paul
In this advanced performance tuning talk, I will discuss and demonstrate performance issues across five topics, spending 6-7 minutes on a real-world example for each: a) how auto plan cache mode for uniformly distributed data can be detrimental to performance compared with generic plans; b) how planning time is determined by plan_cache_mode, the genetic query optimizer (GEQO), and bind variables; c) optimizing queries on partitioned tables, including which parameters to set and best practices to implement; d) optimizing parallel queries and parallel execution; e) why a full index scan is not always beneficial.
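For reference, the generic-versus-custom plan behaviour in topics (a) and (b) is steered by a single setting (the prepared statement and table are hypothetical):

```sql
-- With the default 'auto', after five executions the planner may switch a
-- prepared statement to a cached generic plan if it estimates no worse than
-- the custom plans seen so far.
PREPARE by_status(text) AS SELECT count(*) FROM orders WHERE status = $1;

-- Force either behaviour to demonstrate the difference:
SET plan_cache_mode = force_custom_plan;   -- replan using the actual parameter value
EXECUTE by_status('shipped');
SET plan_cache_mode = force_generic_plan;  -- always use the parameter-agnostic plan
EXECUTE by_status('shipped');
```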