MySQL 8.4 InnoDB Cluster Operations and Query Optimization Handbook


What MySQL 8.4 LTS Means

MySQL 8.4 is Oracle's officially designated Long-Term Support (LTS) release. Since its GA in April 2024, patch releases 8.4.4 through 8.4.8 have shipped steadily through 2025-2026, and MySQL 8.0 reaches end of life in April 2026. Migrating production environments to 8.4 is no longer optional -- it is essential.

Let's first summarize the key changes in 8.4.

Summary of Major Changes

  • mysql_native_password disabled by default: Starting from 8.4.0, it is no longer loaded by default. caching_sha2_password is the default authentication plugin. If you have legacy apps, you must explicitly enable it with --mysql-native-password=ON.
  • Complete removal of MASTER/SLAVE terminology: CHANGE MASTER TO, START SLAVE, SHOW SLAVE STATUS, etc. now produce syntax errors. You must switch to CHANGE REPLICATION SOURCE TO, START REPLICA, SHOW REPLICA STATUS.
  • mysqlpump removed: Use mysqldump or MySQL Shell's dump utilities instead.
  • InnoDB Adaptive Hash Index (AHI) disabled by default: Performance may vary depending on workload. Enable it manually with innodb_adaptive_hash_index=ON if needed.
  • InnoDB Change Buffer disabled by default: innodb_change_buffering now defaults to none. On SSD-backed storage this is generally beneficial; consider re-enabling it only for HDD-heavy workloads with many secondary-index writes.
  • Automatic histogram updates: Improves optimizer statistics accuracy for better query execution plans. Histogram metadata persists even after server restarts.
  • restrict_fk_on_non_standard_key=ON by default: Using non-unique or partial keys as foreign keys is now blocked by default.
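If legacy clients still require mysql_native_password, it can be re-enabled as a temporary bridge while accounts are migrated. A minimal my.cnf fragment (treat this as transitional, not permanent):

```
[mysqld]
# Temporary bridge for legacy clients; remove this line once all
# accounts have been migrated to caching_sha2_password.
mysql_native_password=ON
```

Migrate each remaining account with ALTER USER ... IDENTIFIED WITH caching_sha2_password, then drop the setting.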

InnoDB Cluster Architecture

An InnoDB Cluster consists of three core components:

  1. MySQL Group Replication: A virtually synchronous replication layer built on a Paxos-based group communication protocol. Writes are certified across the group before commit, providing data consistency and automatic failover.
  2. MySQL Shell (AdminAPI): A management interface for creating clusters, adding/removing nodes, and monitoring status.
  3. MySQL Router: Middleware that routes application traffic to the appropriate nodes. It distributes writes to the Primary and reads to Secondaries.
                ┌─────────────────────────────┐
                │         Application         │
                └──────────────┬──────────────┘
                               │
                      ┌────────▼────────┐
                      │  MySQL Router   │
                      │   (R/W split)   │
                      └───┬─────────┬───┘
                   Write  │         │  Read
                  ┌───────▼───┐  ┌──▼────────┐
                  │  Primary  │  │ Secondary │
                  │ (node-1)  │  │ (node-2)  │
                  └───────┬───┘  └──┬────────┘
                          │         │
                      ┌───▼─────────▼───┐
                      │    Secondary    │
                      │    (node-3)     │
                      └─────────────────┘
                   Group Replication (Paxos)

A minimum of 3 nodes is required, with support for up to 9. The majority-quorum rule is what prevents split-brain; prefer an odd node count (3, 5, 7) because an even-sized group tolerates no more failures than the next-smaller odd one -- 4 nodes survive only 1 failure, the same as 3.
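The majority rule above is simple arithmetic; a quick sketch (plain Python, nothing MySQL-specific):

```python
def majority(n: int) -> int:
    """Smallest member count that still forms a quorum in an n-node group."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many members can fail while the group keeps its quorum."""
    return n - majority(n)

# 3 nodes tolerate 1 failure; 4 nodes still tolerate only 1,
# so the extra even node buys no additional fault tolerance.
for n in (3, 4, 5, 7, 9):
    print(n, majority(n), tolerated_failures(n))
```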

InnoDB Cluster vs NDB Cluster Comparison

Here is a comparison table to help decide which clustering solution to choose for your production environment.

| Item | InnoDB Cluster | NDB Cluster |
|---|---|---|
| Storage Engine | InnoDB (disk-based) | NDB (in-memory + disk) |
| Replication Method | Group Replication (Paxos) | Synchronous 2-phase commit |
| Sharding | Not supported (manual partitioning required) | Automatic sharding (Node Groups) |
| Max Nodes | 9 | 255 (144 data nodes) |
| Query Execution | Single-threaded per query | Push-down to data nodes possible |
| Fault Tolerance | Majority of nodes must survive | Can operate with 1 replica per Node Group |
| Suitable Workloads | General OLTP, read scaling | Ultra-low latency, write-intensive, telecom/finance |
| Operational Complexity | Low (managed via MySQL Shell) | High (requires separate Management Node) |
| Foreign Key Support | Full support | Limited |
| Transaction Isolation | All levels supported | READ COMMITTED only |

For most web services, SaaS backends, and general OLTP workloads, InnoDB Cluster is the practical choice in terms of operational complexity and features. NDB Cluster is suited for telecom and financial real-time systems requiring ultra-low latency with hundreds of thousands of TPS or more.

InnoDB Cluster Setup in Practice

Prerequisites

Install the same MySQL 8.4 version on all nodes and verify the following conditions:

  • Confirm all tables use the InnoDB engine (convert any MyISAM tables beforehand)
  • Confirm all tables have a PRIMARY KEY (required for Group Replication)
  • Enable GTID (gtid_mode=ON, enforce_gtid_consistency=ON)
  • Set a unique server_id on each node
  • Register all node hostnames in /etc/hosts (to ensure consistent DNS resolution)
# Install MySQL 8.4 on each node (Ubuntu/Debian example; assumes the
# MySQL APT repository is configured with the 8.4 LTS series selected)
sudo apt-get update
sudo apt-get install mysql-server mysql-shell mysql-router

# Check for tables not using InnoDB engine
mysql -u root -p -e "
  SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
  FROM information_schema.TABLES
  WHERE ENGINE != 'InnoDB'
    AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema','sys');
"

# Check for tables without PRIMARY KEY
mysql -u root -p -e "
  SELECT t.TABLE_SCHEMA, t.TABLE_NAME
  FROM information_schema.TABLES t
  LEFT JOIN information_schema.TABLE_CONSTRAINTS c
    ON t.TABLE_SCHEMA = c.TABLE_SCHEMA
    AND t.TABLE_NAME = c.TABLE_NAME
    AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
  WHERE c.TABLE_NAME IS NULL
    AND t.TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema','sys')
    AND t.TABLE_TYPE = 'BASE TABLE';
"

my.cnf Configuration (common across all nodes)

[mysqld]
# Basic settings
server_id=1                          # Unique value per node (1, 2, 3)
bind-address=0.0.0.0
port=3306
datadir=/var/lib/mysql

# Enable GTID
gtid_mode=ON
enforce_gtid_consistency=ON

# Binary Log
log_bin=mysql-bin
binlog_format=ROW
# Note: binlog_transaction_dependency_tracking was removed in 8.4;
# WRITESET dependency tracking is now always used.
log_replica_updates=ON

# Group Replication basic settings
plugin_load_add=group_replication.so
group_replication_group_name="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
group_replication_start_on_boot=OFF
group_replication_local_address="node1:33061"
group_replication_group_seeds="node1:33061,node2:33061,node3:33061"
group_replication_bootstrap_group=OFF

# InnoDB tuning
innodb_buffer_pool_size=4G           # 70~80% of physical memory
innodb_buffer_pool_instances=4       # buffer_pool_size / 1G
innodb_redo_log_capacity=2G          # replaces the deprecated innodb_log_file_size
innodb_flush_log_at_trx_commit=1     # Data safety first
innodb_flush_method=O_DIRECT

# Default-disabled options to consider enabling based on workload
# innodb_adaptive_hash_index=ON      # Enable for OLTP hotspot queries
# innodb_change_buffering=all        # Enable for HDD usage

# Replication-related
# Note: replica_parallel_type was removed in 8.4 (LOGICAL_CLOCK is always used)
replica_parallel_workers=4           # 4~8 recommended
replica_preserve_commit_order=ON

# Storage engine restriction (prevent non-InnoDB usage)
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"

# Transaction isolation level (for Multi-Primary mode)
# transaction_isolation=READ-COMMITTED
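The file above is written from node1's perspective; on the other nodes only two lines differ (hostnames follow the /etc/hosts entries from the prerequisites):

```
# node2 overrides (everything else identical)
server_id=2
group_replication_local_address="node2:33061"

# node3 overrides
server_id=3
group_replication_local_address="node3:33061"
```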

Creating the Cluster with MySQL Shell

// Connect to MySQL Shell (mysqlsh)
// mysqlsh root@node1:3306

// 1. Verify and auto-configure each instance
dba.configureInstance('root@node1:3306', {
  clusterAdmin: 'clusteradmin',
  clusterAdminPassword: 'SecureP@ss!2026',
})
dba.configureInstance('root@node2:3306', {
  clusterAdmin: 'clusteradmin',
  clusterAdminPassword: 'SecureP@ss!2026',
})
dba.configureInstance('root@node3:3306', {
  clusterAdmin: 'clusteradmin',
  clusterAdminPassword: 'SecureP@ss!2026',
})

// 2. Create the cluster on the first node (Bootstrap)
shell.connect('clusteradmin@node1:3306')
var cluster = dba.createCluster('prodCluster', {
  multiPrimary: false, // Single-Primary mode recommended
  memberWeight: 50,
  expelTimeout: 5,
  autoRejoinTries: 3,
  consistency: 'BEFORE_ON_PRIMARY_FAILOVER',
})

// 3. Add remaining nodes (Clone method recommended)
cluster.addInstance('clusteradmin@node2:3306', {
  recoveryMethod: 'clone',
})
cluster.addInstance('clusteradmin@node3:3306', {
  recoveryMethod: 'clone',
})

// 4. Check cluster status
cluster.status()

Specifying recoveryMethod: 'clone' provisions the new node from a physical snapshot of an existing cluster member. Clone is the safer choice when the data volume is large or the binary logs no longer cover the gap needed for incremental recovery.

MySQL Router Configuration

# Router bootstrap (auto-configures cluster metadata)
mysqlrouter --bootstrap clusteradmin@node1:3306 \
  --directory /opt/mysqlrouter \
  --user=mysqlrouter \
  --conf-use-sockets \
  --conf-bind-address=0.0.0.0

# Start Router
/opt/mysqlrouter/start.sh

# Or manage via systemd
sudo systemctl enable mysqlrouter
sudo systemctl start mysqlrouter

Once the Router is bootstrapped, it uses the following default ports:

  • 6446: R/W port (routes to Primary)
  • 6447: R/O port (routes to Secondaries, round-robin)
  • 6448: R/W X Protocol port
  • 6449: R/O X Protocol port

Simply change your application's DB connection string to point to the Router host's port 6446 (write) or 6447 (read).

Cluster Operations and Monitoring

Daily Operations Commands

// Get the cluster object in MySQL Shell
shell.connect('clusteradmin@node1:3306')
var cluster = dba.getCluster()

// Check overall cluster status
cluster.status({ extended: 1 })

// Detailed status for a specific node
cluster.status({ extended: 2 })

// Temporarily remove a node (for maintenance)
cluster.removeInstance('clusteradmin@node2:3306', { force: false })

// Re-add a node
cluster.addInstance('clusteradmin@node2:3306', { recoveryMethod: 'clone' })

// View cluster options
cluster.options()

// Manual Primary switchover (graceful)
cluster.setPrimaryInstance('clusteradmin@node2:3306')

// Check Router connection status
cluster.listRouters()

Essential Monitoring Queries

These are metrics that must be checked periodically during operations.

-- Check Group Replication member status
SELECT MEMBER_ID, MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
FROM performance_schema.replication_group_members;

-- Check transaction apply lag (run on each Secondary)
SELECT
  CHANNEL_NAME,
  COUNT_TRANSACTIONS_IN_QUEUE AS trx_in_queue,
  COUNT_TRANSACTIONS_CHECKED AS trx_checked,
  COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE AS remote_queue,
  LAST_CONFLICT_FREE_TRANSACTION
FROM performance_schema.replication_group_member_stats
WHERE CHANNEL_NAME = 'group_replication_applier';

-- Check Applier worker status
SELECT
  WORKER_ID, LAST_SEEN_TRANSACTION,
  APPLYING_TRANSACTION, LAST_APPLIED_TRANSACTION
FROM performance_schema.replication_applier_status_by_worker
WHERE CHANNEL_NAME = 'group_replication_applier';

-- Check InnoDB Cluster metadata
SELECT * FROM mysql_innodb_cluster_metadata.clusters;
SELECT * FROM mysql_innodb_cluster_metadata.instances;

Warning: Critical Operational Considerations

  1. Be cautious with DDL: a long-running ALTER TABLE on a large table replicates as a single large transaction and can stall the applier queue on every Secondary. Use online schema change tools such as pt-online-schema-change or gh-ost.
  2. Limit large transactions: Transactions exceeding group_replication_transaction_size_limit (default 150MB) will be rolled back. Split bulk INSERT/UPDATE operations into batches.
  3. Network latency: Network round-trip latency between nodes directly impacts write performance. Co-locating nodes within the same data center is recommended; consider ClusterSet for cross-region deployments.
  4. Binary log management: Set binlog_expire_logs_seconds appropriately to manage disk space. The default is 2592000 seconds (30 days).
  5. Run backups on Secondaries: To reduce load on the Primary, perform physical backups (xtrabackup, mysqlbackup) on Secondary nodes.
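The batch-splitting advice in item 2 is just a chunking loop at the application layer. A minimal sketch in Python (the SQL statement, table, and cursor names in the comment are illustrative assumptions, not from this handbook):

```python
from itertools import islice

def batched(rows, size):
    """Yield successive lists of at most `size` rows, preserving order."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

# Illustrative use with any DB-API connection:
#   for chunk in batched(rows, 5000):
#       cur.executemany("INSERT INTO t (a, b) VALUES (%s, %s)", chunk)
#       conn.commit()  # one commit per batch keeps each GR transaction small
```

One commit per chunk keeps every replicated transaction well under group_replication_transaction_size_limit.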

Failure Scenarios and Recovery Procedures

Scenario 1: Single Secondary Node Failure

This is the most common situation. When one Secondary goes down in a 3-node cluster, the cluster continues normal operation with 2/3 quorum.

// Verify auto-rejoin after the failed node recovers
cluster.status()
// Confirm MEMBER_STATE changes from RECOVERING to ONLINE

// If auto-rejoin fails
cluster.rejoinInstance('clusteradmin@node3:3306')

// If the data gap is large, rejoin via Clone
cluster.removeInstance('clusteradmin@node3:3306', { force: true })
cluster.addInstance('clusteradmin@node3:3306', { recoveryMethod: 'clone' })

Scenario 2: Primary Node Failure

When the Primary goes down, Group Replication automatically elects a new Primary. Among eligible members, the one with the highest member_weight is preferred (ties are broken by lowest server UUID).

// Check the new Primary
cluster.status()

// Rejoin the former Primary as a Secondary after recovery
cluster.rejoinInstance('clusteradmin@node1:3306')

MySQL Router detects the Primary change automatically and routes new write connections to the new Primary. Existing connections to the failed node are dropped, however, so the application still needs connection retry logic.
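That retry logic can be as simple as an exponential-backoff wrapper around whatever connect call the application already uses. A hedged sketch (the `connect` callable is an assumption standing in for your driver's connect function):

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Call connect() until it succeeds, doubling the delay between tries.

    Right after a failover, the first attempts may fail while the new
    Primary is elected and the Router refreshes its routing table.
    """
    for i in range(attempts):
        try:
            return connect()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```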

Scenario 3: Quorum Loss (Majority of Nodes Down)

When 2 out of 3 nodes go down simultaneously, the remaining node loses quorum -- it can only serve reads, not writes.

// Force quorum recovery from the surviving node
// Warning: Data loss is possible
cluster.forceQuorumUsingPartitionOf('clusteradmin@node1:3306')

// After recovering the downed nodes, rejoin them
cluster.rejoinInstance('clusteradmin@node2:3306')
cluster.rejoinInstance('clusteradmin@node3:3306')

forceQuorumUsingPartitionOf is a last resort. This command reconstructs the cluster based on the specified node's data, so transactions that were committed on other nodes but not yet propagated may be lost.

Scenario 4: Full Cluster Restart

The order matters when restarting after a planned full shutdown (e.g., infrastructure maintenance).

// 1. Start the node with the most recent GTID first
// Check on each node:
// SELECT @@gtid_executed;

// 2. Reboot the cluster from that node
shell.connect('clusteradmin@node1:3306')
var cluster = dba.rebootClusterFromCompleteOutage('prodCluster')

// 3. Remaining nodes auto-rejoin or manual rejoin
cluster.rejoinInstance('clusteradmin@node2:3306')
cluster.rejoinInstance('clusteradmin@node3:3306')

Query Optimization Strategies

Query optimization in an InnoDB Cluster environment follows the same principles as a single instance, but must additionally consider the characteristics of Group Replication.

Slow Query Analysis Pipeline

-- 1. Enable slow query log
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;        -- Log queries taking over 1 second
SET GLOBAL log_queries_not_using_indexes = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

-- 2. Identify top slow queries via performance_schema
SELECT
  SCHEMA_NAME,
  DIGEST_TEXT,
  COUNT_STAR AS exec_count,
  ROUND(SUM_TIMER_WAIT / 1000000000000, 3) AS total_sec,
  ROUND(AVG_TIMER_WAIT / 1000000000000, 3) AS avg_sec,
  SUM_ROWS_EXAMINED AS rows_examined,
  SUM_ROWS_SENT AS rows_sent,
  ROUND(SUM_ROWS_EXAMINED / NULLIF(SUM_ROWS_SENT, 0), 1) AS exam_to_sent_ratio,
  FIRST_SEEN,
  LAST_SEEN
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;

An exam_to_sent_ratio (ratio of rows examined to rows returned) of 100 or more is a strong signal that an index needs to be added or the query needs to be rewritten.

# 3. Analyze slow log with pt-query-digest (Percona Toolkit)
pt-query-digest /var/log/mysql/slow.log \
  --limit=20 \
  --order-by=Query_time:sum \
  --output report > /tmp/slow_report.txt

Using EXPLAIN ANALYZE

In MySQL 8.4, EXPLAIN ANALYZE provides execution plans with actual runtime statistics.

EXPLAIN ANALYZE
SELECT o.order_id, o.order_date, c.customer_name, SUM(oi.quantity * oi.unit_price) AS total
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
JOIN order_items oi ON o.order_id = oi.order_id
WHERE o.order_date BETWEEN '2026-01-01' AND '2026-03-01'
  AND c.region = 'APAC'
GROUP BY o.order_id, o.order_date, c.customer_name
ORDER BY total DESC
LIMIT 50;

Key items to focus on in the output:

  • actual time=X..Y: milliseconds per loop to return the first row (X) and the last row (Y) of that step
  • rows: actual rows produced per loop (a large gap versus the estimate means statistics are stale or missing)
  • loops: number of times that step was executed repeatedly
  • Table scan / Index scan: whether a full table scan occurred
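For orientation, the output is a tree that executes from the deepest indentation outward; the numbers below are illustrative, not from a real run:

```
-> Limit: 50 row(s)  (actual time=41.2..41.2 rows=50 loops=1)
    -> Sort: total DESC  (actual time=41.1..41.2 rows=50 loops=1)
        -> Table scan on <temporary>  (actual time=38.6..40.1 rows=18200 loops=1)
            -> Aggregate using temporary table  (actual time=38.6..38.6 rows=18200 loops=1)
                -> Nested loop inner join  (actual time=0.1..30.4 rows=21300 loops=1)
```

Here the join produces 21,300 rows in ~30 ms, and most of the remaining time goes to the temporary-table aggregation and sort.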

Index Strategies

Covering Index

A strategy that includes all columns needed by the query in the index, eliminating access to table data pages.

-- Frequently executed query pattern
SELECT customer_id, order_date, status
FROM orders
WHERE customer_id = 12345
  AND status = 'SHIPPED'
ORDER BY order_date DESC;

-- Create a covering index (order_id is listed explicitly here; if it is
-- the table's primary key, InnoDB already appends it to every secondary index)
CREATE INDEX idx_orders_covering
  ON orders (customer_id, status, order_date DESC, order_id);

-- Verify "Using index" in EXPLAIN output
EXPLAIN SELECT customer_id, order_date, status
FROM orders
WHERE customer_id = 12345
  AND status = 'SHIPPED'
ORDER BY order_date DESC;

Safe Index Management with Invisible Indexes

In MySQL 8.4, you can switch an index to invisible to test the impact before dropping it.

-- 1. Change the index to invisible (the optimizer ignores it)
ALTER TABLE orders ALTER INDEX idx_old_index INVISIBLE;

-- 2. Restore immediately if query plans regress
ALTER TABLE orders ALTER INDEX idx_old_index VISIBLE;

-- 3. Drop it only after a monitoring period with no issues
DROP INDEX idx_old_index ON orders;

Using Histograms

Leverage MySQL 8.4's automatic histogram updates, with the option to create them manually when needed.

-- Create histograms on columns
ANALYZE TABLE orders UPDATE HISTOGRAM ON status, region WITH 100 BUCKETS;

-- Check histogram information
SELECT
  SCHEMA_NAME, TABLE_NAME, COLUMN_NAME,
  JSON_EXTRACT(HISTOGRAM, '$."number-of-buckets-specified"') AS buckets,
  JSON_EXTRACT(HISTOGRAM, '$."sampling-rate"') AS sampling_rate,
  JSON_EXTRACT(HISTOGRAM, '$."histogram-type"') AS hist_type
FROM information_schema.COLUMN_STATISTICS
WHERE TABLE_NAME = 'orders';

Histograms inform the optimizer about the selectivity of columns without indexes, helping with join order and execution plan decisions. They are particularly effective for low-cardinality columns (status, region, type, etc.).

Group Replication-Specific Optimizations

There are additional optimization points to consider in an InnoDB Cluster environment.

  1. Write optimization: Group Replication certifies all write transactions across the entire group. Smaller transactions mean lower chances of certification conflicts. Split bulk INSERTs into batches of 1,000 to 5,000 rows.

  2. Read distribution: Use MySQL Router's R/O port (6447) to distribute read queries to Secondaries. Note that with consistency: 'BEFORE_ON_PRIMARY_FAILOVER', a brief read delay may occur immediately after failover.

  3. Isolation level in Multi-Primary mode: If using Multi-Primary mode, set transaction_isolation=READ-COMMITTED. REPEATABLE READ can increase certification conflicts in Multi-Primary.

  4. Avoid hotspot tables: Concurrent updates to the same rows cause conflicts during the certification stage. For counter tables, consider distributed processing at the application level (e.g., Redis).
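Item 4's counter problem can also be mitigated inside MySQL by sharding the counter across several rows, so concurrent increments rarely touch the same row. A sketch of the slot-picking side in Python (the table and column names in the comments are illustrative assumptions):

```python
import random

SLOTS = 8  # rows backing one logical counter; tune to your write concurrency

def pick_slot():
    """Choose a random slot so concurrent increments update different rows,
    which avoids certification conflicts on a single hot row."""
    return random.randrange(SLOTS)

# Illustrative SQL:
#   UPDATE page_hits SET hits = hits + 1 WHERE page_id = %s AND slot = %s
# Read the total with:
#   SELECT SUM(hits) FROM page_hits WHERE page_id = %s
```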

InnoDB Buffer Pool Tuning

The single setting with the greatest impact on query performance is innodb_buffer_pool_size.

-- Check current buffer pool hit rate
SELECT
  (1 - (
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
     WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads') /
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
     WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
  )) * 100 AS buffer_pool_hit_rate;

-- Consider increasing buffer_pool_size if under 99%

-- Detailed buffer pool status
SELECT
  POOL_ID,
  POOL_SIZE,
  FREE_BUFFERS,
  DATABASE_PAGES,
  MODIFIED_DB_PAGES,
  PAGES_MADE_YOUNG,
  PAGES_NOT_MADE_YOUNG
FROM information_schema.INNODB_BUFFER_POOL_STATS;

-- Change buffer pool size online (no restart required)
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- 8GB

The goal is to maintain a buffer pool hit rate of 99% or higher. If the hit rate drops below 95%, disk I/O spikes dramatically, causing sharp degradation in overall query performance.
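The hit-rate formula in the query above reduces to a one-liner worth keeping in monitoring code; a small sketch:

```python
def buffer_pool_hit_rate(disk_reads: int, read_requests: int) -> float:
    """Percent of logical reads served from the buffer pool.

    disk_reads    = Innodb_buffer_pool_reads (pages read from disk)
    read_requests = Innodb_buffer_pool_read_requests (logical reads)
    """
    if read_requests == 0:
        return 100.0
    return (1 - disk_reads / read_requests) * 100

# e.g. 1,000 disk reads out of 100,000 requests -> 99.0%
```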

Operations Checklist

A checklist for daily operations and incident response.

Daily Checks

  • Verify all nodes are ONLINE via cluster.status()
  • Check that COUNT_TRANSACTIONS_IN_QUEUE in replication_group_member_stats is not abnormally high (alert if over 100)
  • Check for new patterns in the slow query log
  • Check disk usage (binary logs, relay logs, data directory)
  • Verify buffer pool hit rate stays at 99% or above

Weekly Checks

  • Analyze slow query trends with pt-query-digest
  • Check for unused indexes (sys.schema_unused_indexes)
  • Check for redundant indexes (sys.schema_redundant_indexes)
  • Update table statistics (ANALYZE TABLE)
  • Check binary log purge status

Incident Response Preparation

  • Ensure the entire team is familiar with the forceQuorumUsingPartitionOf procedure
  • Document the rebootClusterFromCompleteOutage execution order (most recent GTID node first)
  • Be familiar with the MySQL Router re-bootstrap procedure
  • Verify physical backups (xtrabackup) are performed at least daily on Secondaries
  • Rehearse recovery procedures aligned with Recovery Time Objective (RTO)

Pre-Upgrade Checks

  • Validate compatibility with util.checkForServerUpgrade()
  • Confirm removal of mysql_native_password dependencies
  • Search for and replace code using MASTER/SLAVE terminology
  • Replace mysqlpump scripts with mysqldump or MySQL Shell dump
  • Check for non-unique key FK usage (prepare for restrict_fk_on_non_standard_key)
  • Rolling Upgrade order: Secondary -> Secondary -> Primary (after switchover)

Performance Benchmark Reference Values

Here are approximate performance expectations in an 8.4 LTS environment. These vary significantly based on hardware and workload, so use them only as reference.

| Configuration | Single Instance | InnoDB Cluster 3-Node (Single-Primary) |
|---|---|---|
| Write TPS (sysbench oltp_write_only) | ~15,000 | ~10,000-12,000 (certification overhead) |
| Read QPS (sysbench oltp_read_only) | ~50,000 | ~120,000 (sum of 3 nodes, R/O distributed) |
| Failover Time | N/A | 5-30 seconds (expelTimeout + election time) |
| Clone Recovery Time (100GB data) | N/A | 10-30 minutes (depends on network bandwidth) |

It is normal for write TPS to decrease by 20-30% compared to a single node due to Group Replication certification overhead. Read performance can scale near-linearly by adding nodes.

Conclusion

MySQL 8.4 InnoDB Cluster is a solution with a well-balanced trade-off between operational complexity and availability. Three things are key.

First, fully satisfy all prerequisites (InnoDB engine, PK, GTID) before cluster setup. Second, understand the characteristics of Group Replication (certification-based consensus) and control transaction sizes. Third, focus on optimizing the top 20% of problem queries following the 80/20 rule through performance_schema and slow query analysis.

If you are considering an upgrade from 8.0 to 8.4, it is recommended to first remove mysql_native_password dependencies and replace MASTER/SLAVE terminology, then proceed with a Rolling Upgrade approach.

