MySQL/Galera cluster upgrade
This article discusses situations where you might need to upgrade the cluster software. A rolling upgrade is often considered the alpha and omega of cluster software upgrades, but we'll look at other possibilities too.
1. Rolling Upgrade
This is the type of upgrade everybody is talking about when they talk about “High Availability”. It is supposed to provide uninterrupted service during the upgrade. The idea is that you
1. shut down one node,
2. upgrade the software,
3. restart the node,
4. wait for it to sync with the cluster,
5. repeat steps 1-4 with the next node until all nodes are done.
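As a rough illustration, the per-node procedure might be scripted as follows. This is a minimal sketch, assuming a systemd-managed mysql service and client credentials in ~/.my.cnf; the package names are examples and may differ in your distribution:

# upgrade_node.sh - run on one node at a time (hypothetical helper)
systemctl stop mysql                       # step 1: shut down the node
yum upgrade -y galera MySQL-server-wsrep   # step 2: upgrade software (package names may differ)
systemctl start mysql                      # step 3: restart the node
# step 4: wait until the node reports Synced before moving to the next one
until mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | grep -q Synced; do
    sleep 5
done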
The main advantage of this method is that if something goes wrong with upgrade, the other nodes are still working so you have time to sort it out.
However this method has some issues which deserve consideration:
- Upgrading an individual node in this manner can take considerable time, and during all that time the cluster will operate at reduced capacity:
- until incremental state transfer becomes available in Galera, the node will have to resort to a full state snapshot transfer, which can take a very long time (depending on the database size and the state transfer method)
- during that time the node will accumulate a long queue of catch-up replication events, which it will have to replay to sync with the cluster. Meanwhile, the ongoing cluster operation will keep adding more events to the queue.
- Unless the xtrabackup or rsync+LVM state transfer method is used, the state snapshot donor node will also be blocked for the duration of the state transfer. Even though xtrabackup or rsync+LVM state transfer won't block the donor, it may slow it down considerably. So for practical purposes the cluster will be short 2 nodes for the duration of the state transfer and 1 node for the duration of the catch-up phase.
- If there are few nodes in the cluster and it operates close to its maximum capacity, taking out 2 nodes can leave the cluster unable to serve all requests, or execution times may increase, making the service less available.
- If there are many nodes in the cluster, it would just take a long time to upgrade the whole cluster.
- Depending on the load balancing mechanism, it might be necessary to instruct it not to direct requests to joining and donating nodes (see the health-check sketch after this list).
- Every time a new node joins the cluster, cluster performance will drop while the cluster keeps pace with it until the node's buffer pool warms up. Parallel applying helps to mitigate this, though.
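One way to implement such a check is to only admit nodes that report the Synced state. A minimal sketch, assuming the load balancer can call an external health-check script and the mysql client can log in locally (the script name is hypothetical):

# galera_ready.sh - hypothetical health check for a load balancer
# wsrep_local_state is 4 when the node is Synced; joining and donating
# nodes report other values and should be taken out of rotation
STATE=$(mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state'" | awk '{print $2}')
if [ "$STATE" = "4" ]; then
    exit 0   # Synced: keep the node in the pool
else
    exit 1   # joining/donating: remove the node from the pool
fi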
In the end, the availability of the cluster during a rolling upgrade may not be as high as expected.
2. Bulk Upgrade
The idea behind this upgrade is to upgrade all nodes in an idle cluster in order to avoid time-consuming state transfers. However, it incurs a very short but complete service outage.
1. Stop all load on the cluster (it is important to do this as the first step!).
2. Shut down all the nodes.
3. Upgrade the software.
4. Restart the nodes. This time they will merge into the cluster without state transfer, in a matter of seconds.
5. Resume the load on the cluster.
Operations 2-4 can be performed on all nodes in parallel, therefore (when properly scripted) reducing the service outage time to virtually the time needed for a single server restart.
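For example, the parallel part might look like this. A minimal sketch, assuming passwordless SSH to each node and a systemd-managed mysql service; host and package names are placeholders:

# bulk_upgrade.sh - hypothetical sketch, run after the load is stopped
NODES="node1 node2 node3"
for N in $NODES; do
    # steps 2 and 3 on all nodes in parallel
    ssh "$N" 'systemctl stop mysql && yum upgrade -y galera MySQL-server-wsrep' &
done
wait
for N in $NODES; do
    # step 4: restart (depending on your Galera version, one node may
    # need to be started first as the bootstrap node)
    ssh "$N" 'systemctl start mysql' &
done
wait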
The main advantage of this method is that for huge databases it may be much faster and result in better availability than a rolling upgrade.
Always use this method for a 2-node cluster upgrade, as a rolling upgrade with blocking state transfers would result in a much longer service outage.
The main drawback of this method is that it relies on the upgrade and restart being very quick. However, shutting down InnoDB may take up to a few minutes (flushing dirty pages), and if something goes wrong during the upgrade, there is very little time to fix it. Therefore it may be advisable not to upgrade all nodes at once, but first try it on a single node.
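If the InnoDB shutdown time is a concern, one common trick is to force InnoDB to flush its dirty pages before issuing the shutdown, so that the shutdown itself has little left to do. A minimal sketch (an optional optimization, not a required part of the procedure):

mysql> -- encourage InnoDB to flush dirty pages ahead of the shutdown
mysql> SET GLOBAL innodb_max_dirty_pages_pct=0;
mysql> -- watch this counter approach zero before shutting the node down
mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_pages_dirty';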
3. Provider-only Upgrade
If only a Galera provider upgrade is required, the bulk upgrade method can be further optimized to take only a few seconds. The following is an example for 64-bit CentOS (or RHEL):
- On all nodes:
# rpm -e galera
# rpm -i <new galera rpm>
- Stop load on cluster.
- On all nodes:
mysql> SET GLOBAL wsrep_provider='none';
mysql> SET GLOBAL wsrep_provider='/usr/lib64/galera/libgalera_smm.so';
- On one of the nodes (node1):
mysql> SET GLOBAL wsrep_cluster_address='gcomm://';
- On the rest of the nodes:
mysql> SET GLOBAL wsrep_cluster_address='gcomm://node1';
- Resume load on cluster.
Normally, reloading the provider and reconnecting to the cluster should take less than 10 seconds, so there is virtually no service outage.
But the most important feature of this method is that the warmed-up InnoDB buffer pool is fully preserved, so the cluster will resume operating at full speed as soon as the load is resumed.
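To confirm that the new provider is loaded and the cluster has re-formed, the wsrep status variables can be checked on each node:

mysql> -- should report the new Galera version
mysql> SHOW STATUS LIKE 'wsrep_provider_version';
mysql> -- should equal the number of nodes in the cluster
mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
mysql> -- should be 'Synced' on every node
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';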