Introduction

Percona XtraDB Cluster is a fully open-source high-availability solution for MySQL. It integrates Percona Server and Percona XtraBackup with the Galera library to enable synchronous multi-source replication.

A cluster consists of nodes, where each node contains the same set of data synchronized across nodes. The recommended configuration is at least three nodes, but two-node clusters are also possible. Each node is a regular MySQL Server instance (for example, Percona Server). You can convert an existing MySQL Server instance to a node and run the cluster using this node as a base. You can also detach any node from the cluster and use it as a regular MySQL Server instance.

[Figure: cluster diagram]

Benefits:

  • When you execute a query, it is executed locally on the node. All data is available locally, with no need for remote access.
  • No central management. You can lose any node at any point in time, and the cluster continues to function without data loss.
  • A good solution for scaling read workloads: you can direct read queries to any of the nodes.

Drawbacks:

  • Overhead of provisioning a new node. When you add a node, it must copy the full data set from one of the existing nodes: if the data set is 100 GB, it copies 100 GB.
  • It cannot be used as an effective write-scaling solution. There may be some improvement in write throughput when you send write traffic to two nodes instead of one, but do not expect much: every write still has to be applied on every node.
  • The data is duplicated on every node: with three nodes you have three copies.

Components

Percona XtraDB Cluster is based on Percona Server running with the XtraDB storage engine. It uses the Galera library, which is an implementation of the write set replication (wsrep) API developed by Codership Oy. The default and recommended data transfer method is via Percona XtraBackup.

Install Guide

Environment information

  • MySQL version: percona-xtradb-cluster-57 (5.7.36-31.55-1.xenial amd64)

    | Node   | Host Name | IP           | Flavor                 | OS Distro                                               |
    | Node 1 | pxc1      | 192.168.0.11 | 4C8G, 100 GB data disk | Ubuntu 16.04.7 LTS (GNU/Linux 4.4.0-210-generic x86_64) |
    | Node 2 | pxc2      | 192.168.0.12 | 4C8G, 100 GB data disk | Ubuntu 16.04.7 LTS (GNU/Linux 4.4.0-210-generic x86_64) |
    | Node 3 | pxc3      | 192.168.0.13 | 4C8G, 100 GB data disk | Ubuntu 16.04.7 LTS (GNU/Linux 4.4.0-210-generic x86_64) |

Prerequisites

  • You need to have root access on the node where you will be installing Percona XtraDB Cluster (either logged in as a user with root privileges or be able to run commands with sudo).

  • Make sure that the following ports are not blocked by a firewall or used by other software. Percona XtraDB Cluster requires them for communication.

    • 3306 (MySQL client connections)
    • 4444 (State Snapshot Transfer, SST)
    • 4567 (Galera cluster replication traffic)
    • 4568 (Incremental State Transfer, IST)
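Before installing, it can help to confirm that none of these ports are already taken. A minimal sketch (the `check_pxc_ports` helper name and the parsing of `ss -lnt` output are illustrative assumptions, not part of PXC):

```shell
# Hypothetical helper: reads `ss -lnt` output on stdin and reports which of
# the ports required by Percona XtraDB Cluster are already in use.
# Live usage: ss -lnt | check_pxc_ports
check_pxc_ports() {
  in_use=""
  while read -r line; do
    # Each listening socket line contains "<addr>:<port> " for the local end
    for port in 3306 4444 4567 4568; do
      case "$line" in
        *":$port "*) in_use="$in_use $port" ;;
      esac
    done
  done
  if [ -n "$in_use" ]; then
    echo "in use:$in_use"
  else
    echo "all ports free"
  fi
}
```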

Installing from Repository

  1. Update the system:

    sudo apt update
  2. Install the necessary packages:

    sudo apt install -y wget gnupg2 lsb-release curl
  3. Download the repository package:

    wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
  4. Install the package with dpkg:

    sudo dpkg -i percona-release_latest.generic_all.deb
  5. Refresh the local cache to update the package information:

    sudo apt update
  6. If needed, enable the release repository for Percona XtraDB Cluster 5.7 (optional; the repository must match the 5.7 packages installed in the next step):

    sudo percona-release setup pxc-57
  7. Install the cluster:

    sudo apt install -y percona-xtradb-cluster-57

Configuring Nodes for Write-Set Replication

  1. Stop the Percona XtraDB Cluster server. After the installation completes, the server is not started, so this step is needed only if you have started the server manually.

    $ sudo service mysql stop
  2. Edit the configuration file of the first node to provide the cluster settings.

    If you use Debian or Ubuntu, edit /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf:

    # Path to Galera library
    wsrep_provider=/usr/lib/galera3/libgalera_smm.so

    # Cluster name
    wsrep_cluster_name=pxc-cluster

    # Cluster connection URL contains IPs of nodes
    # If no IP is found, this implies that a new cluster needs to be created;
    # in order to do that you need to bootstrap this node
    wsrep_cluster_address=gcomm://192.168.0.11,192.168.0.12,192.168.0.13

    Configure node 1:

    # If wsrep_node_name is not specified, then the system hostname will be used
    wsrep_node_name=pxc1

    # Node IP address
    wsrep_node_address=192.168.0.11

    # pxc_strict_mode allowed values: DISABLED, PERMISSIVE, ENFORCING, MASTER
    pxc_strict_mode=ENFORCING
  3. Set up node 2 and node 3 in the same way: Stop the server and update the configuration file applicable to your system. All settings are the same except for wsrep_node_name and wsrep_node_address.

    • For node 2

      wsrep_node_name=pxc2
      wsrep_node_address=192.168.0.12

    • For node 3

      wsrep_node_name=pxc3
      wsrep_node_address=192.168.0.13

  4. Set the user information for synchronization (SST). This is configured only on the first node.

    # SST method
    wsrep_sst_method=xtrabackup-v2

    # Authentication for SST method
    wsrep_sst_auth="sstuser:123456"
  5. Set up the traffic encryption settings (optional). Each node of the cluster must use the same SSL certificates.

    [mysqld]
    wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"

    [sst]
    encrypt=4
    ssl-key=server-key.pem
    ssl-ca=ca.pem
    ssl-cert=server-cert.pem
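The per-node differences in steps 2–3 are mechanical, so they can be generated rather than typed by hand. A small sketch (the `gen_node_cnf` helper is an illustrative assumption; the hostnames and IPs come from the environment table above):

```shell
# Hypothetical helper: print the wsrep settings that differ between nodes.
# Everything else in wsrep.cnf is identical on pxc1, pxc2 and pxc3.
gen_node_cnf() {  # usage: gen_node_cnf <node_name> <node_ip>
  printf 'wsrep_node_name=%s\n'    "$1"
  printf 'wsrep_node_address=%s\n' "$2"
}

# Example: the settings for node 2
gen_node_cnf pxc2 192.168.0.12
```

On each host you would append this output to the shared part of wsrep.cnf.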

Bootstrapping the First Node

After you configure all PXC nodes, initialize the cluster by bootstrapping the first node. The initial node must contain all the data that you want to be replicated to other nodes.

Bootstrapping implies starting the first node without any known cluster addresses: if the wsrep_cluster_address variable is empty, Percona XtraDB Cluster assumes that this is the first node and initializes the cluster.

Instead of changing the configuration, start the first node using the following command:

root@pxc1:~# /etc/init.d/mysql bootstrap-pxc

After startup, you need to manually create the synchronization user (this step is not required in version 8.0 and later):

mysql> create user 'sstuser'@'localhost' identified by '123456';
mysql> grant reload, lock tables, replication client, process on *.* to 'sstuser'@'localhost';
mysql> flush privileges;

When you start the node using the previous command, it runs in bootstrap mode with wsrep_cluster_address=gcomm://. This tells the node to initialize the cluster with wsrep_cluster_conf_id set to 1. After you add other nodes to the cluster, you can then restart this node as normal, and it will use standard configuration again.

To make sure that the cluster has been initialized, run the following:

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |
| ... | ... |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| ... | ... |
| wsrep_cluster_size | 1 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| ... | ... |
| wsrep_ready | ON |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)

The previous output shows that the cluster size is 1 node, it is the primary component, the node is in the Synced state, and it is fully connected and ready for write-set replication.
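The three checks above (Synced, Primary, ready) can be scripted. A sketch, assuming the status is fed in as the tab-separated output of `mysql -Ne "show status like 'wsrep%'"`; the `wsrep_healthy` helper name is an assumption, while the status variable names are the standard ones shown above:

```shell
# Hypothetical health check: exits 0 only if the node reports itself as
# Synced, part of the Primary component, and ready for queries.
# Live usage: mysql -Ne "show status like 'wsrep%'" | wsrep_healthy
wsrep_healthy() {
  awk -F'\t' '
    $1 == "wsrep_local_state_comment" { state   = $2 }
    $1 == "wsrep_cluster_status"      { cluster = $2 }
    $1 == "wsrep_ready"               { ready   = $2 }
    END {
      exit !(state == "Synced" && cluster == "Primary" && ready == "ON")
    }
  '
}
```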

Adding Nodes to Cluster

Start the second node using the following command:

root@pxc2:~# systemctl start mysql

After the server starts, it receives SST automatically.

To check the status of the second node, run the following:

mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------------------+
| wsrep_local_state_uuid | a08247c1-5807-11ea-b285-e3a50c8efb41 |
| ... | ... |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| ... | |
| wsrep_cluster_size | 2 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| ... | ... |
| wsrep_provider_capabilities | :MULTI_MASTER:CERTIFICATION: ... |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.55(r8b6416d) |
| wsrep_ready | ON |
| ... | ... |
+----------------------------------+--------------------------------------------------+
75 rows in set (0.00 sec)

The output of SHOW STATUS shows that the new node has been successfully added to the cluster. The cluster size is now 2 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.

If the state of the second node is Synced, as in the previous example, then the node received a full SST, is synchronized with the cluster, and you can proceed to add the next node.
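When adding nodes one at a time, you may want to block until each node reaches Synced before starting the next. A sketch (the `wait_for_synced` helper is an assumption; its first argument is any command that prints the current wsrep_local_state_comment value):

```shell
# Hypothetical helper: poll until the node reports Synced, or give up.
# Example live usage:
#   current_state() { mysql -Ne "show status like 'wsrep_local_state_comment'" | cut -f2; }
#   wait_for_synced current_state
wait_for_synced() {  # usage: wait_for_synced <command-printing-state> [max_tries]
  tries=0
  max=${2:-60}
  until [ "$($1)" = "Synced" ]; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max" ] && return 1   # give up after max_tries polls
    sleep 1
  done
}
```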

Note

  • The MySQL root@localhost user must have an empty password.

References