How we orchestrated a SQL database with Docker Swarm Mode

Crate is a distributed SQL database, and with its shared-nothing architecture it is a perfect fit for a container-based cluster of microservices. While there are many orchestration tools out there, Docker - with the latest update to Docker Engine 1.12 - has built orchestration right into the engine, and managing large clusters of containerized applications becomes a breeze!

Our initial post about the Docker 1.12 RC is already a few weeks old, and due to a couple of differences in the final release in areas like overlay networking, we felt we should explain how to use the new features to run a Crate cluster. Docker 1.12 brings a lot of changes, including native clients for Mac and Windows, but most useful for Crate users is the built-in orchestration with the new Swarm mode.

Prerequisites

To run a Crate cluster, you only need three things:

  • Three (or more) machines that can reach each other over the network and that you can access via SSH
  • docker-machine on your development machine to provision them with the Docker daemon
  • A local Docker client at version 1.12 or later

Having those handy will provide a solid starting point for creating a Crate cluster.
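
If you want to double-check the tooling on your development machine first, something like this should do (any 1.12 or newer client is fine):

$ docker-machine version   # docker-machine must be installed locally
$ docker --version         # local Docker client, should report 1.12 or later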

Setup

We are using 3 Type 0 machines from our friends at Packet.net. To provision those (CentOS 7) machines, we let docker-machine take care of installing the Docker daemon. Issue the following command on your development machine, once for each node you want to provision. Be sure to replace the IP placeholder with each node's actual IP address; crate-sw-X is the name of each node.

$ docker-machine create -d generic --generic-ip-address <MANAGER-IP> --generic-ssh-user root crate-sw-1
Running pre-create checks...
Creating machine...
(crate-sw-1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with centos...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env crate-sw-1

With Docker installed on every machine, we now have to join them into a swarm, but first let's check if all the machines are available:

$ docker-machine ls
NAME         ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
crate-sw-1   -        generic      Running   tcp://<IP1>:2376           v1.12.1
crate-sw-2   -        generic      Running   tcp://<IP2>:2376           v1.12.1
crate-sw-3   -        generic      Running   tcp://<IP3>:2376           v1.12.1

Gather Your Swarm

For the machines to work together, we can create a swarm with Docker's Swarm mode, which works with manager and worker nodes. Hence, in order to continue, choose one of these machines to become the manager node:

$ eval $(docker-machine env crate-sw-1) # connect your local docker client to the remote machine
$ docker swarm init --advertise-addr <MANAGER-IP>:2377
Swarm initialized: current node (4dbxlrkuxnq7m36p7u8hiwba4) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token <TOKEN> \
    <MANAGER-IP>:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As the output of the previous command already suggests, join the worker nodes like this:

$ eval $(docker-machine env crate-sw-2) # connect the local docker client to one of the worker nodes
$ docker swarm join --token <TOKEN> <MANAGER-IP>:2377
This node joined a swarm as a worker.
$ eval $(docker-machine env crate-sw-3) # connect the local docker client to the other worker node
$ docker swarm join --token <TOKEN> <MANAGER-IP>:2377
This node joined a swarm as a worker.

Your swarm is now up and running, ready for a deployment.
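
To verify that the swarm is healthy, list its nodes from the manager; all three machines should show up as Ready, with crate-sw-1 marked as Leader:

$ eval $(docker-machine env crate-sw-1) # talk to the manager again
$ docker node ls                        # lists all swarm members and their status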

Ready to Serve

For the Crate nodes to talk to each other, we still need a network, which - for now - will be a Docker overlay network. For a production deployment this should be a more informed choice, since a software-emulated network might not perform as expected. To create the network, run the following command:

$ docker network create -d overlay crate-network
6p7nxdurhzw2hcwy3bjm2z2wo
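
You can double-check that the network was created with the overlay driver and swarm scope:

$ docker network ls --filter name=crate-network   # should list 'crate-network' with the overlay driver
$ docker network inspect crate-network            # shows driver, scope and (later) the attached containers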

Finally, we are able to run Crate on top of the Docker cluster! For this, a Docker service is required, which is created by the following command:

$ eval $(docker-machine env crate-sw-1) # connect your local docker client to the remote machine
$ docker service create \
    --name crate \
    --network crate-network \
    --mode global \
    --endpoint-mode dnsrr \
    --update-parallelism 1 \
    --update-delay 60s \
    --mount type=bind,source=/tmp,target=/data \
  crate:latest \
    crate \
    -Des.discovery.zen.ping.multicast.enabled=false \
    -Des.discovery.zen.ping.unicast.hosts=crate \
    -Des.gateway.expected_nodes=3 \
    -Des.discovery.zen.minimum_master_nodes=2 \
    -Des.gateway.recover_after_nodes=2 \
    -Des.network.bind=_eth0:ipv4_

There are several things that need explanation in there:

    --name crate \
    --network crate-network \
    --mode global \

These lines name the service 'crate', attach it to the previously created overlay network, and set the scheduling mode to "global", which means that there will always be exactly one container per node. Next, the parameters:

    --endpoint-mode dnsrr \
    --update-parallelism 1 \
    --update-delay 60s \
    --mount type=bind,source=/tmp,target=/data \

define the endpoint mode to be 'dnsrr' (DNS round robin: the DNS entry 'crate' cycles through the available container IP addresses), while update-parallelism and update-delay set how many instances are updated in parallel and with what delay between them. Finally, there is a bind mount to ensure Crate writes its data to a volume outside of the container (don't use a tmpfs such as /tmp for storing data in production; it will be gone after a restart).

  crate:latest \
    crate \
    -Des.discovery.zen.ping.multicast.enabled=false \
    -Des.discovery.zen.ping.unicast.hosts=crate \
    -Des.gateway.expected_nodes=3 \
    -Des.discovery.zen.minimum_master_nodes=2 \
    -Des.gateway.recover_after_nodes=2 \
    -Des.network.bind=_eth0:ipv4_

This part is the same for many Docker deployments of Crate:

  • Multicast: disabled since it's not supported on the overlay networks
  • Unicast hosts: the Docker DNS record 'crate' (the service name) resolves to the containers' IP addresses and lets the nodes discover each other
  • Quorum parameters: since we are working with a 3-node cluster, the expected_nodes, minimum_master_nodes, and recover_after_nodes are set accordingly
  • Set Crate's network binding: the last line tells Crate to explicitly listen and bind to the IPv4 address of eth0 (check if this is your crate-network NIC); required to work around container networking magic
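
Once the service is created, Docker schedules one Crate container on every node, which you can follow with:

$ docker service ls          # shows the 'crate' service and how many of its instances are running
$ docker service ps crate    # shows on which node each container was scheduled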

Scaling

By running the Docker service in 'global' mode, Docker automatically schedules exactly one instance per node. This means that changing the number of nodes in the swarm will automatically scale the cluster up or down (also think of reconfiguring the quorum settings accordingly). To take control of this behavior, we recommend customizing service constraints and labels (see the sketch below). Alternatively, 'replicated' mode would allow for explicit scaling, but it can also deploy more than one instance per machine, which is greatly discouraged since it gives a false impression about replication and resource availability.
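
A rough sketch of that approach (the label name crate-node is only an example): label the nodes that should host Crate, then add a matching constraint to the service create command shown above:

$ docker node update --label-add crate-node=true crate-sw-1   # repeat for every node that should run Crate
$ docker service create --constraint 'node.labels.crate-node == true' ...   # plus the options from above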

Next Steps

This setup gives you a working cluster, but by setting the endpoint mode to DNSRR, Docker disables its built-in load balancer and does not assign a public IP or port mapping to the cluster. The _other_ setting, VIP (virtual IP), would provide a load-balanced public endpoint, but it prohibits communication between the nodes. While we (and others) consider this a bug, a fix by Docker has yet (as of mid-September 2016) to be released.

However, since the ports are not mapped automatically, there are a few ways to remedy this:

  • Use NGINX (or any reverse proxy server) to distribute requests among the swarm nodes by enumerating all container IPs in an upstream section (one way to collect those IPs is sketched after this list)
  • Use a firewall (firewalld, iptables, UFW, ...) to forward requests on the relevant ports to the containers (but be aware that the overlay network itself also requires certain ports)
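
One way to collect those IPs, as a sketch to be run against each swarm node in turn, is to ask Docker for the containers' addresses on crate-network:

$ eval $(docker-machine env crate-sw-1)   # repeat for crate-sw-2 and crate-sw-3
$ docker ps --filter name=crate -q | \
    xargs docker inspect --format '{{ (index .NetworkSettings.Networks "crate-network").IPAddress }}'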

Docker's 1.12 release marks a big step forward in unifying orchestration and management tools for a containerized environment, which is also a great place to run Crate and your production microservices. Furthermore, the rolling upgrade feature of Docker Swarm makes it really easy to keep Crate at the latest version without downtime or risk of data loss.
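
For example, thanks to --update-parallelism 1 and --update-delay 60s from the service definition above, upgrading the whole cluster is a single command that replaces the containers one node at a time (the version tag below is just a placeholder):

$ docker service update --image crate:<NEW-VERSION> crate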

Do you run Crate with Docker (1.12)? Tell us!