This article may not be up to date in certain areas. For the most current and accurate information, we highly recommend referring to the CrateDB documentation and the tutorial Run CrateDB on Kubernetes.
In part one of this miniseries, I introduced the basic concepts that underpin Kubernetes. I then provided a step-by-step guide showing you how to get Kubernetes running on your local machine with Minikube, and how to configure and start a simple three-node CrateDB cluster on top of Kubernetes.
To simplify the initial setup, the configuration I provided uses volatile storage (i.e., RAM) for retaining database state. This is okay for a first run, but ideally, we want our test data to survive a power cycle.
In this post, I address this shortcoming by showing you how to configure persistent (i.e., non-volatile) storage. I then show you how to scale your CrateDB cluster.
Get CrateDB Running Again
If you followed the instructions in part one, you should have a working Kubernetes configuration in place. However, if some time has passed since you did that, there is a good chance that Minikube is no longer running.
No need to fret! It's straightforward to get things going again.
Run the following command:
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
This starts the virtual machine and gets Kubernetes, and by extension, CrateDB, running again. You can verify this, like so:
$ kubectl get service --namespace crate
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
crate-external-service   LoadBalancer   10.99.199.216   <pending>     4200:32406/TCP,5432:30773/TCP   2d
crate-internal-service   ClusterIP      10.110.194.55   <none>        4300/TCP                        2d
Like last time, we're interested in port 32406, because this is the port that Kubernetes tells us has been mapped to 4200. Port 4200 is what CrateDB uses for the HTTP API as well as the Admin UI.
Once again, because of the way Minikube provides the load balancer, we must ask Minikube directly for the external IP address, like so:
$ minikube service list --namespace crate
|-----------|------------------------|--------------------------------|
| NAMESPACE | NAME | URL |
|-----------|------------------------|--------------------------------|
| crate | crate-external-service | http://192.168.99.100:32406 |
| | | http://192.168.99.100:30773 |
| crate | crate-internal-service | No node port |
|-----------|------------------------|--------------------------------|
From this, you can fish out the address to use for the Admin UI by matching it up with the port number we identified in the previous command output. In this example, that's 192.168.99.100:32406, but your address is probably different.
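If you'd like a quick sanity check from the command line first, CrateDB answers plain HTTP requests on that port with some basic cluster information (substitute your own address):

$ curl http://192.168.99.100:32406/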
Plug this address into your web browser, and hey presto:
Now, let's make some changes!
Configure Persistent Storage
You should have a file named crate-controller.yaml with the following section near the bottom:
volumes:
  # Use a RAM drive for storage which is fine for testing, but must
  # not be used for production setups!
  - name: data
    emptyDir:
      medium: "Memory"
We're not setting up a production cluster, but it would still be nice to retain our test data between power cycles. So let's improve this by moving away from RAM.
To create storage that persists to disk, we need to use a persistent volume. We can do this with a persistent volume claim.
As the name suggests, a persistent volume claim instructs Kubernetes to request some storage from the underlying infrastructure. Kubernetes is agnostic as to the implementation details.
Here's a new configuration that requests a 1GB persistent volume per pod:
volumeClaimTemplates:
  # Use persistent storage.
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
You can choose a different volume size if you want.
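Where the disk space actually comes from depends on your Kubernetes environment. On Minikube, persistent volume claims are satisfied by the bundled standard storage class, which provisions hostPath volumes inside the Minikube VM. You can see what your cluster offers, like so:

$ kubectl get storageclass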
To get this working, open the configuration file, delete the volumes section above, and replace it with the new volumeClaimTemplates section. (Note that unlike the volumes section, the volumeClaimTemplates section should be at the same indentation level as serviceName: "crate-set".)
We cannot update the existing pods with this new configuration, because we are changing their storage devices. So when you make this change, you will lose any data you have already written to CrateDB.
You can delete and then recreate the controller, like so:
$ kubectl replace --force -f crate-controller.yaml --namespace crate
statefulset.apps "crate" deleted
statefulset.apps/crate replaced
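Kubernetes will now recreate the pods one by one, each with its own freshly provisioned volume. If you want to watch this happen, you can stream pod status updates, like so:

$ kubectl get pods --namespace crate --watch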
You can verify this worked by asking Kubernetes to list the persistent volume claims with the following command:
$ kubectl get pvc --namespace crate
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-crate-0   Bound    pvc-281c14ef-a47e-11e8-a3df-080027220281   1Gi        RWO            standard       3m
data-crate-1   Bound    pvc-53ec50e1-a47e-11e8-a3df-080027220281   1Gi        RWO            standard       2m
data-crate-2   Bound    pvc-56d3433e-a47e-11e8-a3df-080027220281   1Gi        RWO            standard       2m
As you can see, we now have three 1GB persistent storage volumes. Each volume is bound to a specific pod.
Congratulations! Your data can now survive a power cycle. :)
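If you'd like to convince yourself, here's a minimal check using CrateDB's HTTP SQL endpoint: write a row, power cycle Minikube, and read the row back. This sketch assumes the example address from before (substitute your own), and persistence_test is just a made-up table name. Give the cluster a moment to recover after the restart before querying.

# Create an example table and write one row.
$ curl -X POST 'http://192.168.99.100:32406/_sql' \
    -H 'Content-Type: application/json' \
    -d '{"stmt": "CREATE TABLE persistence_test (id INT)"}'
$ curl -X POST 'http://192.168.99.100:32406/_sql' \
    -H 'Content-Type: application/json' \
    -d '{"stmt": "INSERT INTO persistence_test (id) VALUES (1)"}'

# Power cycle Minikube.
$ minikube stop
$ minikube start

# Once the cluster has recovered, the row should still be there.
$ curl -X POST 'http://192.168.99.100:32406/_sql' \
    -H 'Content-Type: application/json' \
    -d '{"stmt": "SELECT id FROM persistence_test"}'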
In the next two sections, I will show you how to scale CrateDB. Before you continue, it might be a good idea to import some test data so that you can see how CrateDB handles things as the size of the cluster is changed.
Scale Out to Five Nodes
One of the benefits of running CrateDB on a container orchestrator like Kubernetes is the ease with which you can scale a cluster out and in to suit your requirements.
The way that scaling works with Kubernetes is that you increase or decrease the configured number of replicas.
Recall that in our controller configuration, we specify three replicas:
# Our cluster has three nodes.
replicas: 3
Let's scale this out to five.
You can change the number of replicas while the cluster is running. This might be useful to address a load spike quickly. However, that's not ideal for a permanent change, and CrateDB will warn you about this via the Admin UI.
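For a quick ad-hoc change like that, you don't even need to edit any files, because Kubernetes can resize a StatefulSet directly:

$ kubectl scale statefulsets crate --replicas=5 --namespace crate

But because this leaves the CrateDB settings (such as EXPECTED_NODES) out of step with the actual cluster size, we'll make a proper, permanent change to the configuration file instead.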
So, open the crate-controller.yaml file again.
Increase the number of replicas to five, like so:
# Our cluster has five nodes.
replicas: 5
Change EXPECTED_NODES too:
- name: EXPECTED_NODES
  value: "5"
Now we have to update the CrateDB configuration to match.
Both minimum_master_nodes and recover_after_nodes should be set to at least a quorum of the cluster, i.e., floor(n / 2) + 1. For a five-node cluster, that's floor(5 / 2) + 1 = 3.
So, edit the command section and increase them to three:
command:
  - /docker-entrypoint.sh
  - -Ccluster.name=${CLUSTER_NAME}
  - -Cdiscovery.zen.minimum_master_nodes=3
  - -Cdiscovery.zen.hosts_provider=srv
  - -Cdiscovery.srv.query=_crate-internal._tcp.crate-internal-service.${NAMESPACE}.svc.cluster.local
  - -Cgateway.recover_after_nodes=3
  - -Cgateway.expected_nodes=${EXPECTED_NODES}
  - -Cpath.data=/data
Save your edits.
This time, because we're only changing the replicas and containers sections, we can update our controller configuration in place.
Run the following command:
$ kubectl replace -f crate-controller.yaml --namespace crate
statefulset.apps/crate replaced
As before, you can monitor the progress of this scaling action with the kubectl command.
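For example, depending on the update strategy of your StatefulSet, you can ask Kubernetes to report on the rollout as it proceeds:

$ kubectl rollout status statefulset/crate --namespace crate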
You will see that Kubernetes terminates your already running pods one by one. No need to fret. Kubernetes has to terminate them to update the configuration. When Kubernetes starts them again, they retain their identity and storage.
Eventually, you should see something that looks like this:
$ kubectl get pods --namespace crate
NAME      READY     STATUS    RESTARTS   AGE
crate-0   1/1       Running   0          11m
crate-1   1/1       Running   0          11m
crate-2   1/1       Running   0          10m
crate-3   1/1       Running   0          2m
crate-4   1/1       Running   0          2m
Bring up the Admin UI again, and navigate to the cluster browser using the left-hand navigation bar. You should see something like this:
Congratulations! Now you have a five-node cluster.
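You can also confirm the new cluster size with SQL, because CrateDB exposes cluster membership in its sys.nodes table. Here's a quick check over the HTTP endpoint (again, substitute your own address):

$ curl -X POST 'http://192.168.99.100:32406/_sql' \
    -H 'Content-Type: application/json' \
    -d '{"stmt": "SELECT name FROM sys.nodes ORDER BY name"}'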
Scale in by One Node
Let's scale the cluster back in by one node so that we can see how CrateDB handles this sort of operation.
For CrateDB, there is no difference between scaling in by one node and a node unexpectedly dropping out of the cluster due to failure. In both cases, a node is removed from the cluster, and CrateDB handles the rest automatically.
If you haven't already imported some test data, you should do that before continuing.
Once you have some test data, edit the controller configuration and change replicas and EXPECTED_NODES to 4. Leave minimum_master_nodes and recover_after_nodes as they are.
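After your edits, the relevant lines should look like this:

# Our cluster has four nodes.
replicas: 4

And:

- name: EXPECTED_NODES
  value: "4"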
Update your controller configuration again:
$ kubectl replace -f crate-controller.yaml --namespace crate
statefulset.apps/crate replaced
Kubernetes will start making the changes to your cluster.
Changes are always made to pods one by one. Depending on which specific CrateDB pod responds to your browser requests, you may see some different things as Kubernetes makes its changes.
While the cluster configuration is half rolled out (i.e., in an inconsistent state), you will see some checks fail.
When replication is configured (on by default), CrateDB can heal itself by recreating missing shards when nodes are lost. While this process is ongoing, you will see warnings about under-replicated records, like this:
Once the cluster has settled, things should turn green again:
Great! The scale-in operation was a success.
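If you prefer to check this from SQL rather than the Admin UI, the sys.shards table reports the state of every shard; once everything is back to STARTED, recovery is complete. A quick sketch, using the same HTTP endpoint as before:

$ curl -X POST 'http://192.168.99.100:32406/_sql' \
    -H 'Content-Type: application/json' \
    -d '{"stmt": "SELECT state, count(*) FROM sys.shards GROUP BY state"}'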
Wrap Up
In this post, we built upon the work we did to create a simple CrateDB cluster on Kubernetes in part one of this miniseries.
First, we configured persistent storage so that CrateDB data can survive a power cycle. With this in place, we scaled our CrateDB cluster out to five nodes. Then, we scaled it back in by one node and saw how CrateDB recovers from node loss.