During our hackathon last weekend we came up with a template that lets users easily spin up a Crate cluster on Kubernetes. Based on the work of one of our users, we improved a few details, and to get a three-node testing cluster up and running you can use this:
apiVersion: v1
kind: Service
metadata:
  name: crate-discovery
  labels:
    app: crate
spec:
  ports:
    - port: 4200
      name: crate-web
    - port: 4300
      name: cluster
  type: LoadBalancer
  selector:
    app: crate
---
apiVersion: "apps/v1alpha1"
kind: PetSet
metadata:
name: crate
spec:
  serviceName: "crate-discovery"  # must reference the Service defined above
  replicas: 3
  template:
    metadata:
      labels:
        app: crate
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
        - name: crate
          image: crate:latest
          command:
            - /docker-entrypoint.sh
            # $(VAR) references are expanded by Kubernetes from the env section below.
            - -Des.cluster.name=$(CLUSTER_NAME)
            - -Des.multicast.enabled=false
            - -Des.discovery.type=srv
            - -Des.discovery.zen.minimum_master_nodes=2
            - -Des.gateway.recover_after_nodes=2
            - -Des.gateway.expected_nodes=$(EXPECTED_NODES)
            - -Des.discovery.srv.query=_cluster._tcp.crate-discovery.default.svc.cluster.local
          volumeMounts:
            - mountPath: /data
              name: data
          resources:
            limits:
              memory: 2Gi
          ports:
            - containerPort: 4200
              name: db
            - containerPort: 4300
              name: cluster
          env:
            # Half the available memory.
            - name: CRATE_HEAP_SIZE
              value: "1g"
            - name: EXPECTED_NODES
              value: "3"
            - name: CLUSTER_NAME
              value: "my-crate-cluster"
      volumes:
        - name: data
          emptyDir:
            medium: "Memory"
This setup certainly has some trade-offs, and you can read all about them in our official documentation. In particular, pay attention to storage: it should be suitable for high-availability data retention, which the in-memory emptyDir above is not. Furthermore, this is a work in progress and will be updated as soon as there is new functionality that makes Crate and Kubernetes work together even better :)
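For anything beyond a quick test, one option is to give each pet its own persistent volume claim instead of the in-memory emptyDir. Here is a minimal sketch: it replaces the volumes section with a volumeClaimTemplates section in the PetSet spec, assumes your cluster has persistent volumes (or a dynamic provisioner) that can satisfy the claims, and uses 10Gi purely as an example size.

  # Sketch only: one persistent volume claim per pet, bound to the existing
  # "data" volumeMount. Assumes matching volumes can be provisioned or bound.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi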
Tell us what you think!
We would love some feedback on the template: did it help with your cluster management, and what could we do better to support your use case(s)?