Thursday, July 31, 2025

How to kill a Postgres session

Well, it's really called a backend process in Postgres.
  
 pg_terminate_backend( pid integer, timeout bigint DEFAULT 0 ); 
   
We can find the PID by using the following query on everyone's favourite view, pg_stat_activity. I added the time_in_state column to show how long the session has been in its current state.
 
=> SELECT pid, usename, client_addr, application_name, state, wait_event,
          now() - state_change AS time_in_state
   FROM pg_stat_activity
   ORDER BY time_in_state DESC;
 
Once we decide on the PID to kill, pass the PID and the timeout you want.
 
=> SELECT pg_terminate_backend(5866, 3000);
 pg_terminate_backend
----------------------
 t
(1 row)
 
Why is the timeout important? Looking at the docs:
   
  "If timeout is not specified or zero, this function returns true whether the process actually terminates or not, indicating only that the sending of the signal was successful. "  
   
So we should add a timeout in milliseconds, so we know it really worked.
   
  "If the timeout is specified (in milliseconds) and greater than zero, the function waits until the process is actually terminated or until the given time has passed. If the process is terminated, the function returns true. On timeout, a warning is emitted and false is returned."  
   
  https://www.postgresql.org/docs/15/functions-admin.html  
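Putting the lookup and the kill together, here is a sketch (the one-hour threshold is just an example, and the 3000 ms timeout is the same one used above) that terminates every session that has sat idle for more than an hour, excluding our own:

```sql
-- Terminate sessions idle for more than an hour (threshold is an example).
-- pg_backend_pid() keeps us from killing our own session.
SELECT pid, usename,
       pg_terminate_backend(pid, 3000) AS terminated
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '1 hour'
  AND pid <> pg_backend_pid();
```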
   
  ****************************************** 
  keywords: postgres kill session  
  ****************************************** 
  rdbms version: 15  
  ****************************************** 
 

Tuesday, July 15, 2025

Aurora Postgres resource metrics by database

Which database in Aurora Postgres is doing the most work (physical reads, inserts, updates, deletes, activity, etc.)? Useful when you want to know which database has done the most work since startup time.

The query below is sorted by blks_read (physical reads); you can sort it by any column depending on where your bottleneck is.

select datname, xact_commit as commits, xact_rollback as rollbacks,
       blks_read, blks_hit,
       tup_inserted as ins, tup_updated as upds, tup_deleted as dels,
       blk_read_time as blk_r_time, blk_write_time as blk_w_time,
       active_time, storage_blks_read, local_blks_read, storage_blk_read_time
from aurora_stat_database()
order by blks_read desc;
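The same function can also give a rough buffer cache hit ratio per database, a sketch derived only from the columns above (NULLIF guards against dividing by zero on an idle database):

```sql
-- Cache hit percentage per database, lowest first.
select datname,
       blks_read, blks_hit,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) as hit_pct
from aurora_stat_database()
order by hit_pct;
```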



****************************************** 
keywords: aurora postgres 
****************************************** 
rdbms version: Aurora Postgres 15.5 
******************************************

Thursday, July 10, 2025

Postgres Performance and Dead Tuple Bloat

When an existing row is updated or deleted in a table, Postgres creates a new version of the row while keeping the previous version for read consistency. Once the consistent (old) version of the row is no longer needed, it becomes a "dead tuple". These dead tuples are normally removed by the autovacuum process. When the rate of dead tuples being created exceeds the database's ability to clean them up automatically, bloat occurs.

Why does bloat affect performance?

IO and Pages - Postgres stores table rows in pages on disk. Each page contains many rows, and live tuples and dead tuples are stored intermixed on the same pages. As a result, during query execution PostgreSQL loads dead tuples into memory along with the live ones, increasing disk IO and hurting performance because the DBMS retrieves useless data.

Indexes - Indexes are not always directly aware that there might be multiple versions of the same logical row (dead tuples). To an index, each tuple is an independent object that needs its own index entry. The index entries need to be scanned and the table tuple examined to see if it is a current row (exception: Heap-Only Tuples), resulting in additional IO.

Optimizer Statistics - When ANALYZE is run, the bloat in a table can skew the statistics that are gathered.

How to determine if there is a large amount of bloat for a specific table: compare the number of rows in the table to n_dead_tup, or look for a big n_dead_tup number. Live rows vs dead rows:
=> select relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
   from pg_stat_user_tables where relname = 'dba_bloat_test';

    relname     | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup | last_vacuum | last_autovacuum
----------------+-----------+-----------+-----------+------------+------------+-------------+-----------------
 dba_bloat_test |        64 |         0 |         0 |         64 |          0 |             |
(1 row)
Example of bloat in a table. Notice that the number of live tuples is a lot lower than the number of dead tuples. Also check the last_vacuum and last_autovacuum columns.
  relname   | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup |          last_vacuum          |        last_autovacuum
------------+-----------+-----------+-----------+------------+------------+-------------------------------+-------------------------------
 pc_message |  54341382 |  55358532 |  54348018 |     129543 |    6113696 | 2023-11-05 00:59:56.698039-05 | 2023-11-08 14:06:51.683421-06
(1 row)
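To watch these counters move yourself, here is a small sketch you can run in a scratch database (the table name dba_bloat_test and the 10,000-row size are made up for the demo; pg_sleep just gives the stats collector a moment to catch up):

```sql
-- Create some dead tuples on purpose, then look at pg_stat_user_tables.
CREATE TABLE dba_bloat_test (id int PRIMARY KEY, payload text);
INSERT INTO dba_bloat_test SELECT g, 'x' FROM generate_series(1, 10000) g;
UPDATE dba_bloat_test SET payload = 'y';   -- every updated row leaves a dead tuple
SELECT pg_sleep(1);                        -- let the stats collector catch up
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'dba_bloat_test';
```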
Using VACUUM: If autovacuum is not able to remove dead tuples and the n_dead_tup value is very large, then a manual vacuum can be run:

vacuum (verbose) table_name

From the output of the command, check the INFO and DETAIL lines to confirm whether dead tuples were removed. Example of dead tuples not being removed:
=> \c database_name
=> vacuum (verbose) table_name
INFO:  vacuuming "public.pc_message"
INFO:  launched 2 parallel vacuum workers for index cleanup (planned: 2)
INFO:  "pc_message": found 0 removable, 6307536 nonremovable row versions in 762327 out of 766108 pages
DETAIL:  6179913 dead row versions cannot be removed yet, oldest xmin: 3268047052
There were 3452378 unused item identifiers.
Skipped 8 pages due to buffer pins, 3683 frozen pages. 0 pages are entirely empty.
CPU: user: 2.41 s, system: 0.70 s, elapsed: 3.11 s.
INFO:  vacuuming "pg_toast.pg_toast_306812340"
INFO:  "pg_toast_306812340": found 0 removable, 2547494 nonremovable row versions in 622648 out of 623133 pages
DETAIL:  2536488 dead row versions cannot be removed yet, oldest xmin: 3268047052
There were 295255 unused item identifiers.
Skipped 0 pages due to buffer pins, 463 frozen pages. 0 pages are entirely empty.
CPU: user: 1.51 s, system: 0.00 s, elapsed: 1.51 s.
VACUUM
Note: The VACUUM command frees the space held by dead tuples in a table for reuse. It may or may not remove dead index entries from indexes (depending on the Postgres version and parameters); if not, recreate them to remove the dead index entries. To rebuild all indexes on a table, run REINDEX TABLE CONCURRENTLY table_name;. To rebuild a particular index, run REINDEX INDEX CONCURRENTLY index_name;.

Possible Causes

Long running transactions: A transaction might be held open (either active or idle in transaction), causing the dead tuples to be kept for consistent reads:
SELECT pid, datname, usename, state, backend_xmin, backend_xid
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL OR backend_xid IS NOT NULL
ORDER BY greatest(age(backend_xmin), age(backend_xid)) DESC;
You can use the pg_terminate_backend() function to terminate the database session that is blocking your VACUUM.

Orphaned replication slots: Sometimes CDC tools leave the poor little replication slots behind. A replication slot is a data structure that keeps the PostgreSQL server from discarding information that is still needed by a standby server to catch up with the primary. If replication is delayed or the standby server is down, the replication slot will prevent VACUUM from deleting old rows.
SELECT slot_name, slot_type, database, xmin, active, catalog_xmin
FROM pg_replication_slots
ORDER BY age(xmin) DESC;

             slot_name             | slot_type | database | xmin | active | catalog_xmin
-----------------------------------+-----------+----------+------+--------+--------------
 s017379ef6c5e487fae8be1fff2bac66d | logical   | test1_cc |      | f      |    447041325
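The inactive slot (active = f) shown above could then be dropped like this; a sketch only, with the slot name copied from that output, so drop only slots you are certain are orphaned:

```sql
-- Drop the inactive replication slot so VACUUM can make progress again.
SELECT pg_drop_replication_slot('s017379ef6c5e487fae8be1fff2bac66d');
```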
Use the pg_drop_replication_slot() function to drop replication slots that are no longer needed. For logical replication slots the hold-back normally only affects the system catalogs; examine the catalog_xmin column.

****************************************** 
keywords: 
****************************************** 
rdbms version: 
******************************************

Postgres bloat query

When an existing row is updated or deleted in a table, Postgres creates a new version of the row while keeping the previous version for read consistency. Once the consistent (old) version of the row is no longer needed, it becomes a "dead tuple". These dead tuples are normally removed by the autovacuum process, but for various reasons they may not be.

SELECT relname AS table_name, n_live_tup, n_dead_tup,
       ROUND((n_dead_tup::numeric / NULLIF(n_live_tup, 0)) * 100, 2) AS dead_tuple_pct,
       last_autovacuum, last_vacuum
FROM pg_stat_user_tables
WHERE n_live_tup > 0
  AND (n_dead_tup::numeric / n_live_tup) > 0.10
ORDER BY dead_tuple_pct DESC;
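On small tables a handful of dead rows can produce a scary-looking percentage, so a variant that ignores tiny tables may be handy (the 1000-dead-tuple floor is an arbitrary example, tune it for your system):

```sql
-- Same bloat check, but skip tables with fewer than 1000 dead tuples.
SELECT relname AS table_name, n_live_tup, n_dead_tup,
       ROUND((n_dead_tup::numeric / NULLIF(n_live_tup, 0)) * 100, 2) AS dead_tuple_pct
FROM pg_stat_user_tables
WHERE n_live_tup > 0
  AND n_dead_tup >= 1000
  AND (n_dead_tup::numeric / n_live_tup) > 0.10
ORDER BY dead_tuple_pct DESC;
```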

****************************************** 
keywords: Postgres bloat 
****************************************** 
rdbms version: Postgres v15 
******************************************

Thursday, June 26, 2025

Helpful pg_stat_activity queries

Show all sessions ordered by state
select pid, usename, application_name, wait_event_type, wait_event, state from pg_stat_activity order by state;
Active sessions and the query they are running
select pid, usename, application_name, wait_event_type, wait_event, state, query from pg_stat_activity where state = 'active';
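Another handy one along the same lines (standard pg_stat_activity columns; the ordering just surfaces the oldest transactions first) is spotting sessions stuck idle in transaction:

```sql
-- Sessions sitting idle inside an open transaction, oldest first.
select pid, usename, application_name,
       now() - xact_start as xact_age
from pg_stat_activity
where state = 'idle in transaction'
order by xact_age desc;
```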
Basic query for pg_stat_statements
select userid, substring(query, 1, 80), calls, mean_exec_time, rows/calls from pg_stat_statements;
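When you want the counters measured from a known point in time, the extension's pg_stat_statements_reset() function clears the accumulated stats:

```sql
-- Zero out pg_stat_statements so the next look reflects fresh activity.
SELECT pg_stat_statements_reset();
```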
****************************************** 
keywords: pg_stat 
****************************************** 
rdbms version: postgres 15 
******************************************

Thursday, May 22, 2025

Postgres - oh no, I'm not about to post about Postgres

Well, my job now involves a lot of Postgres, so.... More to come.