Architect a Server Cluster

Scaling Directions

You have two directions when it comes to scaling your environment. Scaling up, sometimes referred to as scaling vertically, means that you increase the size of your server by adding additional CPUs and memory. The other direction is to scale out, commonly described as scaling horizontally, which is to add additional servers to your environment.

[Diagram: scaling up vs. scaling out]

Components of a Rock Cluster

There are four major components of a Rock cluster. The following diagram outlines these components with further details below.

[Diagram: components of a Rock cluster]

  1. Application Gateway - The application gateway’s primary job is to direct traffic to the web server cluster. It ensures that requests are load balanced across the various servers in the cluster and that, if a server goes down, traffic is redirected to the remaining servers in the farm. The type and configuration of these gateways will vary depending on your cloud provider. For those using Azure, we recommend the Azure Application Gateway. One important configuration for the app gateway is that it should have session affinity enabled, meaning traffic for a specific client will always return to the same server. This is important, as Rock’s check-in uses session state.
  2. Web Server Cluster - This is the group of web application servers that are participating in the cluster. More details on the configuration of these servers can be found below.
  3. Database - The database in a clustered environment isn’t that different than one in a non-clustered environment. Some organizations choose to enable a database cluster, but this isn’t required. The specifics of setting up your database will be unique to your cloud host. Whether your database is clustered or not is transparent to Rock.
  4. Message Bus - The message bus is what allows the various Rock web servers to communicate with other nodes in the cluster. The details of these conversations are discussed below. Two bus services are currently supported by Rock:
    1. Azure Service Bus
    2. RabbitMQ
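
The session affinity behavior described above for the application gateway can be sketched in a few lines. This is a hypothetical illustration (the server names and cookie name are made up, and a real gateway implements this inside the load balancer itself): the first request from a client is assigned a server, and a cookie pins all later requests to that same server.

```python
import hashlib

# Nodes in the web server cluster (example names).
SERVERS = ["web-1", "web-2", "web-3"]

def route(request_cookies: dict) -> tuple[str, dict]:
    """Return (server, cookies_to_set) for an incoming request."""
    server = request_cookies.get("affinity")
    if server not in SERVERS:  # new client, or its pinned server was removed
        # Pick a server; a real gateway applies its load-balancing policy here.
        key = request_cookies.get("client_id", "")
        server = SERVERS[int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SERVERS)]
    return server, {"affinity": server}

# A client's first request is assigned a server; repeat requests stick to it,
# which is what keeps session state (e.g. for check-in) on one node.
server1, cookies = route({"client_id": "kiosk-42"})
server2, _ = route({**cookies, "client_id": "kiosk-42"})
assert server1 == server2
```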

Note

Redis Server
Previous versions of Rock clusters relied on a Redis server. The message bus now replaces the need for the Redis server. We highly recommend moving away from the Redis solution to the new web farm features. Support for Redis clusters is deprecated as of Rock v13.
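
The role the message bus plays can be pictured as publish/subscribe: when one node changes shared state, it publishes a message, and every other node hears about it and reacts (for example, by flushing a local cache). The sketch below is a minimal in-process stand-in for what Azure Service Bus or RabbitMQ provides; the topic name and payload are illustrative, not Rock’s actual message contracts.

```python
from collections import defaultdict

class Bus:
    """Toy publish/subscribe bus standing in for a real message broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # A real broker delivers over the network; here we call handlers directly.
        for handler in self._subscribers[topic]:
            handler(message)

# Each web server node subscribes to topics it cares about.
bus = Bus()
received = []
bus.subscribe("cache.invalidate", lambda msg: received.append(("node-2", msg)))
bus.subscribe("cache.invalidate", lambda msg: received.append(("node-3", msg)))

# One node publishes; all subscribed nodes receive the message.
bus.publish("cache.invalidate", {"key": "attribute:123"})
```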

Benefits of Clusters

Server clusters have two primary benefits: increased scale and added redundancy. Let’s consider each in turn.

Clusters obviously provide additional computing resources for supporting your digital strategy. Modern cloud environments have no problem providing large servers, but when hosting web applications, at a certain point the law of diminishing returns kicks in. In a Rock environment, CPU is often the constraint. Memory, though important, usually isn’t nearly as limiting a factor as the CPU.

The second benefit of clustering is redundancy. While outages are rare and quickly recovered in a cloud environment, having multiple servers provides a failover capability. This is a great benefit, but be careful: the added complexity of a clustered environment can actually reduce the uptime of your application if it isn’t well architected and managed by a team that understands the new infrastructure and services required.

Though not a primary benefit, some organizations use clusters to partition their traffic. This allows dedicated servers for traffic to support things like the internal portal, external websites, check-in and API hosting. By providing dedicated resources for these services, you can limit the spill-over of congestion from one service to another.
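
Traffic partitioning like this is typically configured at the gateway, routing each request to a dedicated server pool based on hostname so that congestion on, say, the external website can’t starve check-in. A minimal sketch, assuming hypothetical hostnames and pool names:

```python
# Dedicated server pools keyed by hostname (all names are examples).
POOLS = {
    "checkin.example.org": ["checkin-1", "checkin-2"],
    "www.example.org": ["web-1", "web-2"],
    "api.example.org": ["api-1"],
}
# Internal portal and any unmatched hostname fall back to the general pool.
DEFAULT_POOL = ["web-1", "web-2"]

def pool_for(host: str) -> list[str]:
    """Return the server pool that should handle a request for `host`."""
    return POOLS.get(host.lower(), DEFAULT_POOL)
```

Because each service has its own pool, a spike in API traffic is absorbed by `api-1` alone rather than spilling over into the check-in servers.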

What Makes Clusters Difficult

Implementing a Rock cluster drastically increases the complexity of your environment. Below are several factors you should consider before moving forward with a cluster project.

  1. Increased Infrastructure Knowledge: As you’ve seen, there are several new infrastructure components required to run a Rock cluster. Components like application gateways can be difficult to set up and support. Be sure that your team has the necessary engineering knowledge to support these environments now and in the future.
  2. Rock Updates: Updating Rock is more difficult in a clustered environment. Currently, each server will need to be updated manually (or through custom DevOps automation).
  3. Rock File Types: In a clustered environment, it’s important that all Rock file types are cloud or database backed. If you use the ‘File System’ type, files will only be available on a single node in the cluster.
  4. File System Sync: Besides file types, there are other ways for files to end up on a single node. For example, the HTML editor allows files to be uploaded to the ~/Content directory of a node. These files will appear as broken links to clients routed to other web application nodes in the cluster. There are tools you can configure to sync these directories, but they represent another service to configure and maintain.
  5. TLS Complexity: In a clustered environment, we recommend you use a provisioned wildcard digital certificate instead of an ACME certificate. This represents an additional cost, and you’ll need to remember to renew it.
  6. Limited Partner Support: There are a limited number of Rock Partners with familiarity with Rock’s web farm features.
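
To make the file-sync concern from the list above concrete, the sketch below shows the kind of one-way sync such a tool performs: copy any file present in the source node’s content directory but missing (or stale) on a target node. This is an illustrative script, not a recommendation; real deployments use a purpose-built service such as DFS Replication or rsync rather than a script like this.

```python
import shutil
from pathlib import Path

def sync_content(source: Path, target: Path) -> list[str]:
    """Copy files from `source` that are missing or older in `target`.

    Returns the relative paths of the files that were copied.
    """
    copied = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        dst_file = target / rel
        # Copy when the target is missing the file or has an older version.
        if not dst_file.exists() or dst_file.stat().st_mtime < src_file.stat().st_mtime:
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
            copied.append(str(rel))
    return copied
```

Note that this only moves files in one direction; keeping several nodes mutually in sync multiplies the moving parts, which is exactly the maintenance burden the list above warns about.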

Recommendations for Rock Clusters

As you consider the design of your Rock cluster, there are several principles to keep in mind. Below are our recommendations as you plan your implementation.

  1. The Best Cluster is No Cluster: Simply put, server clusters are difficult and add complexity to your environment. Before committing to a cluster, confirm that a single well-sized server can’t meet your needs.
  2. Prefer Fewer, Larger Servers: As you plan your cluster, we recommend having fewer, larger nodes rather than many smaller nodes. The fewer nodes you have, the less complexity you have.
  3. Scale Up, Not Out: If you would like to add additional resources for critical timeframes, we recommend you scale up the existing servers in your cluster rather than adding new nodes to the cluster. Adding new nodes requires complex DevOps work to create node templates and deploy them as needed, whereas increasing the size of a running node is typically much easier using existing tools provided by your cloud service.