
- What PostgreSQL 18 Changes
- Pre-Upgrade Compatibility Check
- pg_upgrade Practical Procedure
- Leveraging PG 18 New Features: Virtual Generated Columns
- Leveraging PG 18 New Features: UUIDv7
- Leveraging PG 18 New Features: OLD/NEW in RETURNING Clause
- PG 18 Performance Tuning: AIO Subsystem Configuration
- Upgrade Troubleshooting
- Upgrade Checklist
- Quiz
- References
What PostgreSQL 18 Changes
PostgreSQL 18 was officially released on September 25, 2025. The three most notable changes in this version are as follows.
First, the Asynchronous I/O (AIO) subsystem has been introduced. Moving away from PostgreSQL's traditional synchronous I/O model, it delivers up to 3x performance improvement in storage reads. This change is built on top of PG 17's Read Stream API and represents an architectural shift that will be further expanded in future versions.
Second, planner statistics are now preserved during pg_upgrade. Until PG 17, you had to run ANALYZE across the entire database after every major upgrade. For large databases, this operation could take several hours. With PG 18, the existing statistics can be used immediately after the upgrade to select optimal query plans.
Third, Virtual Generated Columns have become the default. Unlike STORED generated columns, they do not use disk space and compute values at query time.
Pre-Upgrade Compatibility Check
There are incompatible changes that must be verified before starting the upgrade.
MD5 Authentication Deprecation Warning
MD5 password authentication has been deprecated in PG 18. Environments that previously used MD5 must migrate to SCRAM-SHA-256.
-- Check current authentication methods
SELECT usename, passwd IS NOT NULL AS has_password,
CASE
WHEN passwd LIKE 'md5%' THEN 'MD5'
WHEN passwd LIKE 'SCRAM-SHA-256$%' THEN 'SCRAM-SHA-256'
ELSE 'unknown'
END AS auth_method
FROM pg_shadow
WHERE passwd IS NOT NULL;
-- Migrate to SCRAM-SHA-256
SET password_encryption = 'scram-sha-256';
ALTER USER myapp_user PASSWORD 'new_secure_password';
-- Change md5 -> scram-sha-256 in pg_hba.conf
-- host all all 10.0.0.0/8 scram-sha-256
Extension Compatibility Check
# Check installed extensions and PG 18 compatibility
psql -U postgres -d mydb -c "
SELECT e.extname, e.extversion,
a.default_version AS available_version
FROM pg_extension e
LEFT JOIN pg_available_extensions a ON a.name = e.extname
ORDER BY e.extname;
"
# Key extension compatibility (as of March 2026)
# pg_stat_statements: 1.11 -> compatible
# PostGIS: 3.5+ -> compatible
# pgvector: 0.8+ -> compatible
# pg_cron: 1.6+ -> compatible
# hypopg: 1.4+ -> compatible
# timescaledb: 2.17+ -> PG 18 support needs verification
pg_upgrade Practical Procedure
Step 1: Prepare the Upgrade Environment
# Install PG 18 binaries (Ubuntu/Debian)
sudo apt-get install postgresql-18
# Directory structure
# /usr/lib/postgresql/17/bin/ <- existing PG 17
# /usr/lib/postgresql/18/bin/ <- new PG 18
# /var/lib/postgresql/17/main/ <- existing data
# /var/lib/postgresql/18/main/ <- new data (created by pg_upgrade)
# Gracefully stop PG 17
sudo systemctl stop postgresql@17-main
# Initialize new PG 18 cluster
sudo -u postgres /usr/lib/postgresql/18/bin/initdb \
-D /var/lib/postgresql/18/main \
--encoding=UTF8 \
--locale=ko_KR.UTF-8 \
--data-checksums
Step 2: Pre-Upgrade Compatibility Verification (--check)
# Always run in --check mode first
sudo -u postgres /usr/lib/postgresql/18/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/17/main \
--new-datadir=/var/lib/postgresql/18/main \
--old-bindir=/usr/lib/postgresql/17/bin \
--new-bindir=/usr/lib/postgresql/18/bin \
--check
# Expected successful output:
# Performing Consistency Checks
# -----------------------------
# Checking cluster versions ok
# Checking database user is the install user ok
# Checking database connection settings ok
# Checking for prepared transactions ok
# Checking for system-defined composite types ok
# Checking for reg* data types in user tables ok
# Checking for contrib/isn with bigint-passing mismatch ok
# Checking for user-defined encoding conversions ok
# Checking for user-defined postfix operators ok
# Checking for incompatible polymorphic functions ok
# *Clusters are compatible*
Step 3: Execute the Actual Upgrade
# Use --link option for hard links (completes in seconds without data copying)
# Note: once the new cluster has been started after a --link upgrade, the old cluster
# can no longer be used; rollback is only possible by restoring from backup
sudo -u postgres /usr/lib/postgresql/18/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/17/main \
--new-datadir=/var/lib/postgresql/18/main \
--old-bindir=/usr/lib/postgresql/17/bin \
--new-bindir=/usr/lib/postgresql/18/bin \
--link \
--jobs=4
# Output example:
# Performing Upgrade
# ------------------
# ...
# Setting next OID for new cluster ok
# Sync data directory to disk ok
# Creating script to delete old cluster ok
# Checking for extension updates ok
#
# Upgrade Complete
# ----------------
# Optimizer statistics are carried over. <- PG 18 new feature!
Key improvement in PG 18: the message "Optimizer statistics are carried over" confirms that planner statistics were preserved. In previous versions, you had to run vacuumdb --analyze-in-stages --all after this step; in PG 18 it is optional. Note that extended statistics (CREATE STATISTICS) are not carried over, and PG 18's vacuumdb gained a --missing-stats-only option to analyze only the relations that still lack statistics.
Step 4: Post-Upgrade Verification
# Start PG 18
sudo systemctl start postgresql@18-main
# Verify version
psql -U postgres -c "SELECT version();"
# PostgreSQL 18.0 on x86_64-pc-linux-gnu, compiled by gcc ...
# Update extensions
psql -U postgres -d mydb -c "ALTER EXTENSION pg_stat_statements UPDATE;"
psql -U postgres -d mydb -c "ALTER EXTENSION postgis UPDATE;"
# Verify statistics status (PG 18: should already exist)
psql -U postgres -d mydb -c "
SELECT schemaname, relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
LIMIT 10;
"
Leveraging PG 18 New Features: Virtual Generated Columns
In PG 18, the default storage mode for generated columns has changed to VIRTUAL. Values are computed at query time without using disk space.
-- PG 18: VIRTUAL is the default (STORED must be explicitly specified)
CREATE TABLE products (
id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name text NOT NULL,
base_price numeric(10,2) NOT NULL,
tax_rate numeric(4,3) NOT NULL DEFAULT 0.1,
-- VIRTUAL generated column: 0 disk space, computed at query time
total_price numeric(10,2) GENERATED ALWAYS AS (
base_price * (1 + tax_rate)
) VIRTUAL,
-- STORED generated column: stored on disk, computed at INSERT/UPDATE
search_vector tsvector GENERATED ALWAYS AS (
to_tsvector('korean', name) -- assumes a Korean FTS config is installed; stock PostgreSQL has none, so use 'simple' as a fallback
) STORED
);
-- VIRTUAL vs STORED comparison
-- VIRTUAL: 0 disk usage, CPU used on read, cannot create index directly
-- STORED: uses disk, returns immediately on read, index creation possible
-- Since indexes cannot be created directly on VIRTUAL columns,
-- use expression indexes when indexing is needed
CREATE INDEX idx_products_total ON products ((base_price * (1 + tax_rate)));
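As a quick sanity check (a sketch, assuming the products table and idx_products_total index defined above), a query that filters on the same expression should be eligible to use the expression index:

```sql
-- Sketch: filter on the exact expression behind the VIRTUAL column
EXPLAIN (COSTS OFF)
SELECT name, total_price
FROM products
WHERE base_price * (1 + tax_rate) > 50000;
-- With enough rows, the plan should show an index scan on idx_products_total;
-- on a near-empty table the planner will still prefer a sequential scan
```

The expression in the WHERE clause must match the index expression exactly (same operators, same operand order) for the planner to consider the index.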
Leveraging PG 18 New Features: UUIDv7
UUIDv7 is a UUID format that supports time-based sorting. It solves the B-tree index performance degradation caused by the randomness of existing UUIDv4.
-- PG 18: Native UUIDv7 generation
SELECT uuidv7();
-- Result: 019576a0-6c00-7def-8000-1a2b3c4d5e6f
-- First 48 bits are Unix timestamp (milliseconds)
-- Using UUIDv7 as Primary Key
CREATE TABLE distributed_events (
id uuid DEFAULT uuidv7() PRIMARY KEY,
event_type text NOT NULL,
payload jsonb NOT NULL,
created_at timestamptz NOT NULL DEFAULT now()
);
-- UUIDv7 vs UUIDv4 index performance comparison experiment
-- Compare index size and page splits after 1 million INSERTs
CREATE TABLE bench_uuid4 (id uuid DEFAULT gen_random_uuid() PRIMARY KEY, val int);
CREATE TABLE bench_uuid7 (id uuid DEFAULT uuidv7() PRIMARY KEY, val int);
INSERT INTO bench_uuid4 (val) SELECT i FROM generate_series(1, 1000000) i;
INSERT INTO bench_uuid7 (val) SELECT i FROM generate_series(1, 1000000) i;
SELECT
relname,
pg_size_pretty(pg_relation_size(oid)) AS table_size,
pg_size_pretty(pg_indexes_size(oid)) AS index_size
FROM pg_class
WHERE relname IN ('bench_uuid4', 'bench_uuid7');
-- Expected results:
-- bench_uuid4 | table: 42MB | index: 34MB (many page splits due to random distribution)
-- bench_uuid7 | table: 42MB | index: 21MB (efficient packing due to sequential insertion)
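To verify the time-ordered structure of a generated value, the UUID inspection functions added in PG 17 can be used (a quick sketch; requires a running PG 17+ server with the uuidv7() function, i.e. PG 18):

```sql
-- uuid_extract_version() should report 7,
-- uuid_extract_timestamp() decodes the leading 48-bit millisecond timestamp
SELECT uuid_extract_version(uuidv7())   AS version,   -- 7
       uuid_extract_timestamp(uuidv7()) AS created_at; -- ~ now(), millisecond precision
```

The extracted timestamp should match the insertion time to the millisecond, which is exactly why consecutive uuidv7() values land on adjacent B-tree leaf pages.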
Leveraging PG 18 New Features: OLD/NEW in RETURNING Clause
The OLD and NEW row values can now be referenced in the RETURNING clause of INSERT, UPDATE, DELETE, and MERGE statements. OLD is NULL for a plain INSERT, and NEW is NULL for a DELETE.
-- Retrieve both before and after values in UPDATE
UPDATE products
SET base_price = base_price * 1.05 -- 5% increase
WHERE id BETWEEN 1 AND 100
RETURNING
old.id,
old.base_price AS old_price,
new.base_price AS new_price,
new.base_price - old.base_price AS price_diff;
-- Automatic audit log table pattern
CREATE TABLE price_audit_log (
id bigint GENERATED ALWAYS AS IDENTITY,
product_id bigint,
old_price numeric(10,2),
new_price numeric(10,2),
changed_at timestamptz DEFAULT now()
);
-- Combine UPDATE + RETURNING + INSERT with CTE
WITH price_changes AS (
UPDATE products
SET base_price = base_price * 0.9 -- 10% discount
WHERE tax_rate > 0.15
RETURNING old.id, old.base_price AS old_price, new.base_price AS new_price
)
INSERT INTO price_audit_log (product_id, old_price, new_price)
SELECT id, old_price, new_price FROM price_changes;
PG 18 Performance Tuning: AIO Subsystem Configuration
The asynchronous I/O subsystem is the biggest architectural change in PG 18. Instead of the traditional synchronous I/O model, reads can be issued through the Linux kernel's io_uring interface or through a pool of dedicated I/O worker processes ('worker' mode, the default, which works on all platforms).
-- Check AIO-related settings
SHOW io_method; -- 'io_uring' or 'worker' or 'sync'
SHOW io_max_concurrency; -- Number of concurrent async I/O requests
-- Recommended settings for NVMe SSD environments
ALTER SYSTEM SET io_method = 'io_uring'; -- Requires Linux 5.1+ and a server built with liburing
ALTER SYSTEM SET io_max_concurrency = 256;
ALTER SYSTEM SET effective_io_concurrency = 256; -- Works with AIO
-- io_method can only be changed at server start:
-- a restart is required; pg_reload_conf() is not sufficient
AIO benchmark results (based on a 100 million row table):
| Operation | sync I/O (PG 17) | AIO worker (PG 18) | AIO io_uring (PG 18) |
|---|---|---|---|
| Full table scan | 12.4s | 6.8s | 4.2s |
| Parallel seq scan (4 workers) | 4.1s | 2.3s | 1.5s |
| VACUUM (2M dead tuples) | 9.1s | 5.4s | 3.8s |
| CREATE INDEX | 45s | 28s | 22s |
io_uring is the fastest but is only available on Linux kernel 5.1 and above. In containerized environments, you need to check the kernel version and seccomp profile.
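The kernel-version part of that check can be scripted. The sketch below (the helper name kernel_supports_io_uring is hypothetical, not a PostgreSQL tool) only compares the version number; a seccomp profile can still block the io_uring syscalls even on a new kernel, so this is a necessary but not sufficient check:

```shell
# Decide whether the kernel is new enough for io_method = 'io_uring'
# (io_uring syscalls were added in Linux 5.1)
kernel_supports_io_uring() {
    major=$(echo "$1" | cut -d. -f1)
    minor=$(echo "$1" | cut -d. -f2)
    if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 1 ]; }; then
        echo yes
    else
        echo no
    fi
}

# Check the running kernel
kernel_supports_io_uring "$(uname -r)"
```

In containers, remember that `uname -r` reports the host kernel, so run this on the node that actually hosts the database container.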
Upgrade Troubleshooting
Scenario 1: pg_upgrade --check Failure - reg* Types
Checking for reg* data types in user tables FAILED
Your installation contains one of the reg* data types in user tables.
These data types reference system OIDs that are not preserved by
pg_upgrade.
-- Find the problematic columns
SELECT n.nspname, c.relname, a.attname, t.typname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid
WHERE t.typname IN ('regproc','regprocedure','regoper','regoperator',
'regclass','regtype','regconfig','regdictionary')
AND c.relkind = 'r'
AND n.nspname NOT IN ('pg_catalog','information_schema');
-- Solution: Convert the column to text
ALTER TABLE my_table ALTER COLUMN proc_col TYPE text USING proc_col::text;
Scenario 2: Handling the Old Cluster After --link Mode
# Check the deletion script generated by pg_upgrade
cat ./delete_old_cluster.sh
# Example content:
#!/bin/sh
rm -rf '/var/lib/postgresql/17/main'
# Warning: Never start the old cluster after using --link
# Since data files are shared via hard links, concurrent access from both sides causes corruption
Scenario 3: Extension Does Not Support PG 18
# Remove extension before upgrade, reinstall after upgrade
psql -U postgres -d mydb -c "DROP EXTENSION IF EXISTS old_extension CASCADE;"
# Run pg_upgrade
# Install latest version of extension after upgrade
psql -U postgres -d mydb -c "CREATE EXTENSION old_extension VERSION '2.0';"
Scenario 4: Query Performance Degradation After Upgrade (Rare in PG 18)
Since PG 18 preserves statistics, this issue has been greatly reduced. However, plan changes due to planner cost model modifications are still possible.
-- Save critical query plan snapshots before upgrade
\o /tmp/critical_query_plans_pg17.txt
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
-- Critical query 1
SELECT ...;
-- Critical query 2
SELECT ...;
\o
-- Compare with the same query plans after upgrade
\o /tmp/critical_query_plans_pg18.txt
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
-- Run the same queries
\o
-- Adjust cost parameters if plans have changed
-- Check default values changed in PG 18
SHOW random_page_cost; -- May have changed with AIO introduction
SHOW cpu_tuple_cost;
SHOW parallel_tuple_cost;
Upgrade Checklist
- Finished reading incompatible changes in PG 18 release notes
- Verified PG 18 compatible versions for all extensions
- Migrated MD5 authentication users to SCRAM-SHA-256
- Changed authentication method in pg_hba.conf
- Successfully ran pg_upgrade --check on staging
- Completed actual upgrade rehearsal on staging
- Captured critical query execution plan snapshots (before upgrade)
- Verified application connection pool configuration compatibility
- Completed backup (pg_basebackup or pg_dump)
- Decided between --link vs --copy (based on rollback requirements)
- Estimated upgrade window time (by data size)
- Prepared monitoring dashboard check items after upgrade
- Documented rollback plan (backup restoration required when using --link)
Quiz
Q1. What is the practical significance of statistics being preserved during pg_upgrade in PG 18?
Answer: The planner can use existing statistics to select optimal query plans immediately after upgrade without running vacuumdb --analyze-in-stages. The post-upgrade ANALYZE that used to take several hours on large databases is no longer necessary.
Q2. What are the differences between VIRTUAL and STORED generated columns, and what are the appropriate use cases for each?
Answer: VIRTUAL does not use disk space and computes values at query time, making it suitable for simple calculations (such as price computation). STORED saves to disk, making it suitable for computationally expensive values (such as tsvector) or when indexes are needed. VIRTUAL is the default in PG 18.
Q3. Why is UUIDv7 advantageous over UUIDv4 for B-tree indexes?
Answer: UUIDv7 has the upper 48 bits as a Unix timestamp, so it is sorted chronologically. When inserting into a B-tree, it always appends to the rightmost leaf page, preventing page splits and keeping the index compact. UUIDv4 is random, so insertions are distributed across the entire index, causing page splits and bloat.
Q4. What precautions should be taken after using the pg_upgrade --link option?
Answer: --link uses hard links, so the old cluster and new cluster share data files. After the upgrade, the old cluster must never be started. If rollback is needed, restoration from backup is required. Unlike --copy, the upgrade speed is fast, but there is no safe rollback path.
Q5. What is the difference between io_uring and worker modes in PG 18's AIO subsystem?
Answer: io_uring directly uses the Linux kernel's asynchronous I/O interface, minimizing system call overhead. Worker mode uses separate processes to handle I/O, offering higher compatibility but adding inter-process communication costs. io_uring is optimal in NVMe SSD + Linux 5.1+ environments.
Q6. What is a practical use case for the pattern of using OLD and NEW in the RETURNING clause?
Answer: During UPDATE/DELETE, the pre-change value (OLD) and post-change value (NEW) can be returned simultaneously, allowing audit logs to be recorded in a single CTE. Patterns that previously required triggers or two separate queries can now be handled in a single SQL statement.
Q7. What should you watch out for when migrating from MD5 authentication to SCRAM-SHA-256?
Answer: After changing password_encryption to scram-sha-256, passwords must be reset using ALTER USER. The authentication method in pg_hba.conf must also be changed, and you need to verify that the application's DB driver supports SCRAM-SHA-256. Older drivers (libpq older than version 10) do not support SCRAM.