Scaling Drupal stack with Galera: part 1
Submitted by alex on Sat, 04/11/2009 - 15:17
Drupal is a widely used content management system written in PHP that uses an SQL server as its storage backend. Although Drupal can work with PostgreSQL, an Apache/Drupal/MySQL stack is the usual production configuration.
A number of strategies have been developed to scale Drupal performance through clustering; see the excellent series of articles by John Quinn. Naturally, all of them are capped by the SQL server component. Perhaps Galera is the "holy grail" of Drupal scaling-out?
Drupal 6 core SQL load has the following nice qualities:
- It uses AUTOCOMMIT mode (read: single-statement transactions).
- It is very read-intensive.
- It does not lock tables.
- It does not use a sequence table.
As such, it seems a perfect application to use with a MySQL/Galera cluster, and it was the first real-life application in line to test Galera with. Unfortunately, our first demo release... well, it turned out it just didn't work with a real-life autocommit load: we had missed some key semantics of autocommit queries. Demo2 was to fix all that and more. And it did, but then we ran into a peculiar combination of an obscure MySQL bug and a less obscure Drupal bug, which fired way too often in our Amazon EC2 environment. Finally, since we are patching MySQL anyway, we implemented a workaround for the bug in MySQL (that's right, the wsrep MySQL patch fixes a Drupal bug among the other cool things it does!), and now I can tell my Galera success story with Drupal: it works!
While the first step in existing Drupal clustering strategies is putting Apache and MySQL servers on different machines, I went with something that is impossible without synchronous replication: clustering the whole Apache/Drupal/MySQL stack. That means that each node in the cluster runs both Apache and MySQL servers and nodes can be completely identical. This approach has a number of advantages:
- It is very simple — just N identically configured machines. In EC2 a single AMI can be used to spawn all cluster nodes.
- Each node is a self-contained website server, so even one node can serve the whole site.
- One HTTP request to a Drupal site results in several (if not dozens of) SQL requests. It is faster to perform them locally rather than over the network.
- Available CPU time is optimally divided between Apache and MySQL, whereas when the servers are separated, normally one will be overloaded and the other underloaded.
Of course, Drupal and MySQL can be clustered separately (the MySQL cluster acting as a virtual server for the Drupal cluster), and for more sophisticated sites that might be the preferable approach, but for the purpose of this article it is just not as fancy.
So for the benchmarking I used 1-4 node clusters of small and large EC2 instances. Node configuration is trivial: just a regular Drupal/MySQL server. The MySQL servers are clustered using Galera; HTTP load balancing is done by GLB.
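To illustrate, the MySQL side of such a node would carry a small wsrep section in its configuration, roughly along these lines (a minimal sketch; exact option names and defaults depend on the wsrep patch version, and the paths and addresses here are placeholders):

```ini
# /etc/my.cnf fragment (hypothetical values)
[mysqld]
binlog_format=ROW                       ; Galera replicates row events
default_storage_engine=InnoDB           ; only InnoDB tables are replicated
innodb_autoinc_lock_mode=2              ; interleaved auto-increment locks
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.1  ; first node bootstraps with empty gcomm://
```

All nodes share the same configuration except for the cluster address, which is what makes the single-AMI deployment mentioned above possible.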
The internal Drupal cache has to be disabled for any sort of replication, since a single Drupal server cannot track all changes to the site and can't properly invalidate outdated cache entries. Incidentally, this also has one desirable side effect. How do you emulate normal website activity with a simple benchmark? Obviously, the JMeter script that I made is going to hit the same pages, using the same logins, over and over again. One way to alleviate this and pretend that different pages are hit is to disable the internal Drupal cache, forcing it to regenerate pages as if they were hit for the first time. Thus I assume that disabling the cache provides more realistic figures.
Benchmarking Drupal cluster
It turned out to be not so simple.
To begin with, how do you generate load for Drupal? I could not find any Drupal benchmarks, so I followed another helpful article by John Quinn and created a JMeter load profile. It consists of three thread groups:
- posters — the guys who post articles.
- commenters — the guys who read and comment on articles.
- browsers — the guys who just read articles and comments.
Now, unlike an SQL benchmark, a Web benchmark just can't fire HTTP requests as fast as it can through some arbitrary number of connections. There are certain intervals between human clicks in the browser, and that has to be reflected in the load. Thankfully, JMeter allows inserting delays between requests. I used Gaussian timers with a mean of 15 seconds and a dispersion of 7. What does this mean? That 3-4 users just can't saturate the server. By trial it was found that a combination of 4 posters, 12 commenters and 24 browsers (total: 40 users) can barely saturate a single server running on a small EC2 instance. For the rest of this article I'll be using these proportions, only changing the total number of users: 40, 80, 120, etc. For example, a large EC2 instance is saturated by 240 users. This is radically different from SQL benchmarking, where the number of connections is chosen so as to produce the highest score, while CPU utilization is almost always at 100%.
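The arithmetic behind these numbers is simple back-of-the-envelope queueing: with a mean think time of 15 seconds, each user issues about 4 requests per minute, so 40 users offer roughly 160 requests per minute. A quick sketch (the numbers come from this article; nothing here is Drupal-specific):

```python
def offered_rate(users, mean_think_s=15.0):
    """Requests per minute offered by `users` clients that each wait
    `mean_think_s` seconds between clicks on average."""
    return users * 60.0 / mean_think_s

# 40 users with a 15 s mean think time offer ~160 req/min;
# the saturated small instance actually served ~129 req/min.
print(offered_rate(40))   # 160.0
print(offered_rate(240))  # 960.0 (the large-instance saturation load)
```

Note that 240 users at 4 requests per minute is also where the 960 simultaneous connections mentioned later come from.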
Next, is saturation, or the highest HTTP request rate, the metric we want to measure? The subjective speed of the website, as perceived by the user, is determined by server response times, i.e. latency. A site that serves 10000 users at 3 seconds per page is perceived to be slower than a site serving 10 users at 1 second per page. And the problem here is that on an underloaded server, latencies depend mostly on the speed of the CPU and network, because concurrency is small and there is always spare CPU time. On a saturated server, latencies become proportional to the number of concurrent users and can rapidly become prohibitively large. The following figure shows how throughput and latency depend on the number of concurrent users (EC2 small instance, Drupal 6.10, official MySQL 5.1.32 binary):
As can be seen, HTTP load is highly inelastic: either throughput is proportional to the number of users and latency is roughly constant, or throughput is constant and latency is proportional to the number of users. At 100 users Apache goes on an error spree and the server is no longer usable.
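The inelastic behaviour described above can be caricatured with a trivial saturation model (my own sketch, not a fit to the measured data): below capacity, throughput grows with the number of users at a constant latency; above capacity, throughput is pinned and latency grows linearly as requests queue up.

```python
def saturation_model(users, per_user_rpm, capacity_rpm, base_latency_s):
    """Return (throughput in req/min, latency in s) for an idealized server."""
    offered = users * per_user_rpm
    if offered <= capacity_rpm:
        return offered, base_latency_s             # spare CPU: latency stays flat
    # saturated: excess demand queues up, latency scales with the overload factor
    return capacity_rpm, base_latency_s * offered / capacity_rpm

# below saturation: throughput tracks the number of users
print(saturation_model(20, 4, 129, 0.5))  # (80, 0.5)
# past saturation: throughput is capped, latency blows up
print(saturation_model(80, 4, 129, 0.5))
```

The kink between the two regimes is exactly the "elbow" visible in the figure.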
As a result, page generation latency turns out to be a not-so-straightforward metric of web-server capability.
And finally, what does server saturation (no idle CPU cycles) mean in the case of Apache? 40 users on a small EC2 instance mean, on average, about 20 Apache processes competing for the CPU. That is not too bad yet, but 80 users will already bring the server to a crawl (21 seconds per page generation, 30 seconds to launch the 'top' command). Thus, while an 80-user load will just saturate a cluster of 2 small EC2 instances, it will practically kill a single node and leave the CPU about 30% idle on a 3-node cluster.
The above two considerations make it almost impossible to meaningfully benchmark different Drupal clusters with the same number of users to determine Drupal/Galera cluster scalability. So in the following I adopted two approaches:
- Benchmark the cluster of 1-4 nodes with the load that saturates a single node. This will give an impression of how much latency improves with the same number of users.
- Benchmark the cluster of 1-4 nodes increasing the load proportionally to the number of nodes. This will show how cluster throughput capability grows when adding more nodes.
Throughput scalability, EC2 small instances:
Nodes  Users  Request rate  Latency  Error rate
              (req/min)     (ms)     (%)
-----------------------------------------------
  1      40       129         3950      0.07
  2      80       259         3960      0.06
  3     120       387         3700      0.05
  4     160       514         3490      0.12
Throughput scalability of the Drupal/Galera cluster is surprisingly linear. This is not so surprising, though, because at least in this benchmark the Apache server was the clear performance bottleneck. After all, when page generation takes almost 4 seconds, what are a few tens of milliseconds of synchronization overhead? Unfortunately, this is a dubious success: it does show that Drupal works well with Galera, but nobody would like to operate their site at this load level. 4-second latencies are a clear sign of overload.
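The linearity claim is easy to check from the table: throughput per node stays essentially flat as nodes are added. A quick sanity check over the measured numbers:

```python
# (nodes, req/min) pairs from the throughput table above
results = [(1, 129), (2, 259), (3, 387), (4, 514)]

for nodes, rpm in results:
    per_node = rpm / nodes
    print(nodes, round(per_node, 1))
# per-node throughput stays within ~1% of 129 req/min at every cluster size
```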
Let's see how the cluster behaves at the moderate load level.
Latency, EC2 small instances:
Nodes  Users  Request rate  Latency  Error rate
              (req/min)     (ms)     (%)
-----------------------------------------------
  1      40       129         3950      0.07
  2      40       172          566      0.02
  3      40       177          447      0.08
  4      40       179          437      0.04
That's what I call inelastic: you add just one node and your latencies drop sevenfold. Performance-wise there is no point in adding more nodes; you won't notice any improvement. From the end-user perspective this may not be that bad, indeed, but it makes a benchmarker's life harder ;)
EC2 large instances
We all know that small EC2 instances are seriously handicapped by Xen. For Drupal, a large EC2 instance is approximately 6 times faster than a small one (at 4 times the cost), and any real-life website should be hosted on nothing smaller. However, I encountered an embarrassing setback while benchmarking: GLB failed to handle the 960 simultaneous connections needed to saturate the 4-node cluster. The second part of the article, presenting the results of the large-node Drupal cluster, will follow shortly after the GLB fix.
The main issue was spurious transient HTTP 403, 404 and 503 errors. On the one hand, they happen with a standalone Drupal too, when it is saturated. On the other hand, they happen in the Drupal cluster even when it is underloaded, and there is no clear correlation between the error rate and any other parameter. The cause of these errors remains an open question. Some of them might be due to source-agnostic GLB load balancing directing HTTP requests from one user session to different nodes; perhaps using a more sophisticated load balancer (like Apache) would solve it. However, they are fairly rare (approx. 0.05% rate, or 1 in 2000) and don't reflect the actual state of the server, so for now I chose to ignore them.
Another issue is the Drupal files directory, where Drupal stores user files and some of its JS and CSS files. Technically it is part of the application state which is not stored in the database and, as a result, is not replicated. It should not be difficult to write a Drupal module that would store these files in the database as blobs. Meanwhile (and that's what most people are doing), the administrator can place this directory on a network share and share it between the cluster nodes.
Stay tuned. In the second article we'll see how well Drupal/Galera combo performs on large EC2 instances and explore further the problem of Drupal cluster elasticity.