Monday, October 18, 2021

Detect slow PostgreSQL queries: pg_stat_statements vs. auto_explain logging overhead

Being able to find and detect slow queries on a PostgreSQL RDBMS becomes more and more important as the database grows. Optimizing expensive SQL queries is one of the major tasks of database system maintainers. These problems can be approached in many ways and with external systems, but I have tried to find a fast, “keep it simple” approach, which I will explain in this blog post.


One of the cool PostgreSQL features is the auto_explain module, which “provides a means for logging execution plans of slow statements automatically, without having to run EXPLAIN by hand”, as stated by the documentation. The documentation also states that this comes at a price in overhead, so I would suggest not using excessive auto_explain logging on production systems.
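As a sketch of what enabling it could look like on a test system (these are real auto_explain parameters, but the 250 ms threshold is just an example value I picked):

-- load the module into the current session only (no server restart required)
LOAD 'auto_explain';
-- log the execution plan of every statement running longer than 250 ms
SET auto_explain.log_min_duration = '250ms';
-- optionally include actual timings and row counts (this adds overhead!)
SET auto_explain.log_analyze = true;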

The alternative that I use often is the pg_stat_statements module, which can be easily configured on a production or test system; the module has to be loaded at server start via shared_preload_libraries, as sketched below.
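A minimal configuration sketch, assuming you can edit postgresql.conf and restart the server (the max and track values are just example choices, not recommendations):

# in postgresql.conf -- loading the module requires a server restart
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000     # number of distinct statements to track
pg_stat_statements.track = all     # also track statements inside functions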


After that we need to run CREATE EXTENSION pg_stat_statements in our databases, which makes PostgreSQL create the pg_stat_statements view for us.
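For example (the IF NOT EXISTS clause simply makes the statement safe to re-run):

-- run once in every database that should expose the view
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;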

SELECT * FROM pg_stat_statements;

The created view tells us which query has been executed how often, and gives us insights into the total runtime of each type of query as well as the distribution of runtimes for those particular queries.
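A sketch of how I typically query it (these column names are valid as of PostgreSQL 13; older releases use total_time and mean_time instead of the *_exec_time columns):

-- the ten most expensive statement types by accumulated runtime
SELECT query,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;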



This allows me to get a quick overview of the share of total runtime (and therefore CPU) each type of query consumes, as well as of its I/O behavior, which is often a large part of what causes high load on production systems.
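A hedged sketch of such an overview query, again assuming the PostgreSQL 13+ column names (the runtime share is computed with a window function over all tracked statements):

-- per statement type: share of total runtime and shared-buffer block I/O
SELECT query,
       calls,
       round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS percent_runtime,
       shared_blks_read,
       shared_blks_written
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;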