New to KubeDB? Please start here.
Exploring Fault Tolerance in Druid with KubeDB
Understanding High Availability and Failover in Druid on KubeDB
Failover in Druid refers to the process of automatically switching to a standby or replica node when a critical service (like the Coordinator or Overlord) fails. In distributed analytics systems, this ensures that ingestion, query, and management operations remain available even if one or more pods go down. KubeDB makes this seamless by managing Druid’s lifecycle and health on Kubernetes.
Druid’s architecture consists of several node types:
- Coordinator: Manages data segment availability and balancing.
- Overlord: Handles task management and ingestion.
- Broker: Routes queries to Historical and Real-time nodes.
- Historical: Stores immutable, queryable data segments.
- MiddleManager: Executes ingestion tasks.
- Router: Routes requests to Brokers, Coordinators, and Overlords, and serves the Druid web console.
KubeDB supports running multiple replicas for each role, providing high availability and automated failover. If a pod fails, KubeDB ensures a replacement is started and the cluster remains operational.
In this guide, you’ll:
- Deploy a highly available Druid cluster
- Verify the roles and health of Druid pods
- Simulate failures and observe automated failover
- Validate data/query continuity
Before You Begin
At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using kind.
Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps here. When running the helm command, make sure to include the flags --set global.featureGates.Druid=true to enable the Druid CRD and --set global.featureGates.ZooKeeper=true to enable the ZooKeeper CRD, since Druid depends on ZooKeeper as an external dependency.
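For reference, a typical operator installation with these feature gates looks something like the sketch below; the chart source, namespace, and license flag follow the standard KubeDB installation pattern and may differ in your environment, so follow the linked installation guide for the exact command and version:
$ helm install kubedb oci://ghcr.io/appscode-charts/kubedb \
    --namespace kubedb --create-namespace \
    --set-file global.license=/path/to/the/license.txt \
    --set global.featureGates.Druid=true \
    --set global.featureGates.ZooKeeper=true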
To keep things isolated, we will use a separate namespace called demo throughout this tutorial.
$ kubectl create namespace demo
namespace/demo created
$ kubectl get namespace
NAME STATUS AGE
demo Active 9s
Note: YAML files used in this tutorial are stored in guides/druid/quickstart/overview/yamls folder in GitHub repository kubedb/docs.
We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Druid. If you just want to try out KubeDB, you can bypass some safety features following the tips here.
Find Available StorageClass
We will have to provide a StorageClass in the Druid CRD specification. Check the available StorageClasses in your cluster using the following command,
$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 14h
Here, we have the standard StorageClass in our cluster, provided by the Local Path Provisioner.
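Later, when the data-serving nodes need persistent volumes, this StorageClass can be referenced under spec.topology in the Druid manifest. A minimal sketch of what that looks like for the historicals (the 1Gi size is just an illustrative value; adjust it to your workload):
  topology:
    historicals:
      replicas: 2
      storage:
        storageClassName: "standard"
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi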
Find Available DruidVersion
When you install the KubeDB operator, it registers a CRD named DruidVersion. The installation process comes with a set of tested DruidVersion objects. Let’s check the available DruidVersions by running,
$ kubectl get druidversion
NAME VERSION DB_IMAGE DEPRECATED AGE
28.0.1 28.0.1 ghcr.io/appscode-images/druid:28.0.1 24h
30.0.1 30.0.1 ghcr.io/appscode-images/druid:30.0.1 24h
31.0.0 31.0.0 ghcr.io/appscode-images/druid:31.0.0 24h
Notice the DEPRECATED column. Here, true means that this DruidVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated DruidVersion. You can also use the short form drversion to check the available DruidVersions.
In this tutorial, we will use the 31.0.0 DruidVersion CR to create a Druid cluster.
Get External Dependencies Ready
Deep Storage
One of Druid’s external dependencies is deep storage, where the segments are stored. It is a storage mechanism that Apache Druid itself does not provide. Amazon S3, Google Cloud Storage, Azure Blob Storage, S3-compatible storage (like MinIO), and HDFS are generally convenient options for deep storage.
In this tutorial, we will run a minio-server as deep storage in our local kind cluster using minio-operator and create a bucket named druid in it, which the deployed Druid cluster will use.
$ helm repo add minio https://operator.min.io/
$ helm repo update minio
$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
--set tenant.pools[0].servers=1 \
--set tenant.pools[0].volumesPerServer=1 \
--set tenant.pools[0].size=1Gi \
--set tenant.certificate.requestAutoCert=false \
--set tenant.buckets[0].name="druid" \
--set tenant.pools[0].name="default"
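Before moving on, it is a good idea to confirm that the MinIO tenant pod and its headless service (which the deep storage config below points at) are running; the names shown are the defaults created by the tenant chart and may vary:
$ kubectl get pods,svc -n demo | grep myminio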
Now we need to create a Secret named deep-storage-config. It contains the connection information that the Druid cluster will use to connect to the deep storage.
apiVersion: v1
kind: Secret
metadata:
  name: deep-storage-config
  namespace: demo
stringData:
  druid.storage.type: "s3"
  druid.storage.bucket: "druid"
  druid.storage.baseKey: "druid/segments"
  druid.s3.accessKey: "minio"
  druid.s3.secretKey: "minio123"
  druid.s3.protocol: "http"
  druid.s3.enablePathStyleAccess: "true"
  druid.s3.endpoint.signingRegion: "us-east-1"
  druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
Let’s create the deep-storage-config
Secret shown above:
$ kubectl create -f https://github.com/kubedb/docs/raw/v2025.10.17/docs/examples/druid/quickstart/deep-storage-config.yaml
secret/deep-storage-config created
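Optionally, verify that the Secret exists and carries the expected keys:
$ kubectl describe secret -n demo deep-storage-config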
Deploy a Highly Available Druid Cluster
Create a Druid Cluster
The KubeDB operator implements a Druid CRD to define the specification of Druid.
The Druid instance used for this tutorial:
apiVersion: kubedb.com/v1alpha2
kind: Druid
metadata:
  name: druid-cluster
  namespace: demo
spec:
  version: 31.0.0
  deletionPolicy: Delete
  deepStorage:
    type: s3
    configSecret:
      name: deep-storage-config
  topology:
    coordinators:
      replicas: 2
    overlords:
      replicas: 2
    brokers:
      replicas: 2
    historicals:
      replicas: 2
    middleManagers:
      replicas: 2
    routers:
      replicas: 2
Let’s save the manifest above as druid-cluster.yaml and create the Druid cluster:
$ kubectl apply -f druid-cluster.yaml
druid.kubedb.com/druid-cluster created
The Druid’s STATUS will go from Provisioning to Ready within a few minutes. Once the STATUS is Ready, you are ready to use the newly provisioned Druid cluster.
$ kubectl get druid -n demo -w
NAME TYPE VERSION STATUS AGE
druid-cluster kubedb.com/v1alpha2 31.0.0 Ready 4m12s
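To dig deeper into the provisioning conditions and component status, you can also describe the Druid object:
$ kubectl describe druid -n demo druid-cluster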
Inspect Druid Pod Roles and Health
On another terminal, you can monitor the status until all pods are ready:
$ watch kubectl get druid,petset,pods -n demo
Verify that the database is ready:
$ kubectl get druid,petset,pods -n demo
NAME TYPE VERSION STATUS AGE
druid.kubedb.com/druid-cluster kubedb.com/v1alpha2 31.0.0 Ready 15m
NAME AGE
petset.apps.k8s.appscode.com/druid-cluster-brokers 14m
petset.apps.k8s.appscode.com/druid-cluster-coordinators 14m
petset.apps.k8s.appscode.com/druid-cluster-historicals 14m
petset.apps.k8s.appscode.com/druid-cluster-middlemanagers 14m
petset.apps.k8s.appscode.com/druid-cluster-mysql-metadata 15m
petset.apps.k8s.appscode.com/druid-cluster-overlords 14m
petset.apps.k8s.appscode.com/druid-cluster-routers 14m
petset.apps.k8s.appscode.com/druid-cluster-zk 15m
NAME READY STATUS RESTARTS AGE
pod/druid-cluster-brokers-0 1/1 Running 0 14m
pod/druid-cluster-brokers-1 1/1 Running 0 14m
pod/druid-cluster-coordinators-0 1/1 Running 0 14m
pod/druid-cluster-coordinators-1 1/1 Running 0 14m
pod/druid-cluster-historicals-0 1/1 Running 0 14m
pod/druid-cluster-historicals-1 1/1 Running 0 14m
pod/druid-cluster-middlemanagers-0 1/1 Running 0 14m
pod/druid-cluster-middlemanagers-1 1/1 Running 0 14m
pod/druid-cluster-mysql-metadata-0 2/2 Running 0 15m
pod/druid-cluster-mysql-metadata-1 2/2 Running 0 15m
pod/druid-cluster-mysql-metadata-2 2/2 Running 0 15m
pod/druid-cluster-overlords-0 1/1 Running 0 14m
pod/druid-cluster-overlords-1 1/1 Running 0 14m
pod/druid-cluster-routers-0 1/1 Running 0 14m
pod/druid-cluster-routers-1 1/1 Running 0 14m
pod/druid-cluster-zk-0 1/1 Running 0 15m
pod/druid-cluster-zk-1 1/1 Running 0 15m
pod/druid-cluster-zk-2 1/1 Running 0 15m
pod/myminio-default-0 2/2 Running 0 3d21h
You can check the roles and status of Druid pods using labels:
$ kubectl get pods -n demo --show-labels | grep role
druid-cluster-brokers-0 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-brokers-64667d6fbb,kubedb.com/role=brokers,statefulset.kubernetes.io/pod-name=druid-cluster-brokers-0
druid-cluster-brokers-1 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-brokers-64667d6fbb,kubedb.com/role=brokers,statefulset.kubernetes.io/pod-name=druid-cluster-brokers-1
druid-cluster-coordinators-0 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-coordinators-955d5f7c4,kubedb.com/role=coordinators,statefulset.kubernetes.io/pod-name=druid-cluster-coordinators-0
druid-cluster-coordinators-1 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-coordinators-955d5f7c4,kubedb.com/role=coordinators,statefulset.kubernetes.io/pod-name=druid-cluster-coordinators-1
druid-cluster-historicals-0 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-historicals-54894c9748,kubedb.com/role=historicals,statefulset.kubernetes.io/pod-name=druid-cluster-historicals-0
druid-cluster-historicals-1 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-historicals-54894c9748,kubedb.com/role=historicals,statefulset.kubernetes.io/pod-name=druid-cluster-historicals-1
druid-cluster-middlemanagers-0 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-middlemanagers-5556d8775c,kubedb.com/role=middleManagers,statefulset.kubernetes.io/pod-name=druid-cluster-middlemanagers-0
druid-cluster-middlemanagers-1 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-middlemanagers-5556d8775c,kubedb.com/role=middleManagers,statefulset.kubernetes.io/pod-name=druid-cluster-middlemanagers-1
druid-cluster-mysql-metadata-0 2/2 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster-mysql-metadata,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-mysql-metadata-d99dc44fc,kubedb.com/role=primary,statefulset.kubernetes.io/pod-name=druid-cluster-mysql-metadata-0
druid-cluster-mysql-metadata-1 2/2 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster-mysql-metadata,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-mysql-metadata-d99dc44fc,kubedb.com/role=standby,statefulset.kubernetes.io/pod-name=druid-cluster-mysql-metadata-1
druid-cluster-mysql-metadata-2 2/2 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster-mysql-metadata,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mysqls.kubedb.com,apps.kubernetes.io/pod-index=2,controller-revision-hash=druid-cluster-mysql-metadata-d99dc44fc,kubedb.com/role=standby,statefulset.kubernetes.io/pod-name=druid-cluster-mysql-metadata-2
druid-cluster-overlords-0 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-overlords-d8fd8d477,kubedb.com/role=overlords,statefulset.kubernetes.io/pod-name=druid-cluster-overlords-0
druid-cluster-overlords-1 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-overlords-d8fd8d477,kubedb.com/role=overlords,statefulset.kubernetes.io/pod-name=druid-cluster-overlords-1
druid-cluster-routers-0 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=0,controller-revision-hash=druid-cluster-routers-86f759b75b,kubedb.com/role=routers,statefulset.kubernetes.io/pod-name=druid-cluster-routers-0
druid-cluster-routers-1 1/1 Running 0 2d19h app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-cluster,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,apps.kubernetes.io/pod-index=1,controller-revision-hash=druid-cluster-routers-86f759b75b,kubedb.com/role=routers,statefulset.kubernetes.io/pod-name=druid-cluster-routers-1
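Since every pod carries a kubedb.com/role label, you can filter for a specific role directly. For example, to list only the Coordinator pods:
$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-cluster,kubedb.com/role=coordinators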
How Failover Works in Druid with KubeDB
KubeDB continuously monitors the health of Druid pods. If a Coordinator, Overlord, or any other critical pod fails (due to crash, node failure, or manual deletion), KubeDB:
- Detects the failure
- Automatically creates a replacement pod
- Ensures the new pod joins the cluster and resumes its role
This process is automatic and typically completes within seconds, ensuring minimal disruption. You can learn more from here.
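Throughout the hands-on tests in the next section, it can also be helpful to keep a second terminal watching cluster events, where pod deletions and recreations show up almost immediately:
$ kubectl get events -n demo -w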
Hands-on Failover Testing
Case 1: Delete a ZooKeeper Pod
For highly-available ZooKeeper, KubeDB provides a cluster of 3 ZooKeeper nodes. You can delete one of the ZooKeeper pods to see how KubeDB handles failover.
Delete a ZooKeeper pod
$ kubectl delete pod -n demo druid-cluster-zk-0
pod "druid-cluster-zk-0" deleted
In another terminal, you can watch the pod roles:
$ watch -n 2 "kubectl get pods -n demo -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.labels.kubedb\\.com/role}{\"\\n\"}{end}'"
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 primary
druid-cluster-mysql-metadata-1 standby
druid-cluster-mysql-metadata-2 standby
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
You can now see that a new druid-cluster-zk-0 pod is created automatically and joins the ZooKeeper ensemble. Because ZooKeeper is highly available, the Druid cluster continues to function without interruption.
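If you want to double-check the ensemble itself, you can also inspect the ZooKeeper object that KubeDB manages for this Druid cluster; the object name here is assumed from the PetSet name shown earlier:
$ kubectl get zookeeper -n demo druid-cluster-zk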
Note: You should not delete more than one ZooKeeper pod at a time, since ZooKeeper relies on a majority of nodes to maintain quorum. If all nodes are deleted, quorum is lost and the entire ZooKeeper ensemble becomes unavailable. This in turn disrupts the Druid cluster’s ability to coordinate and manage its distributed components, which may result in service interruptions and potential data loss.
Case 2: Delete a MySQL Pod
Druid uses MySQL for metadata storage. Each of the MySQL nodes also has a role (primary or standby), which you can see from the pod labels.
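For example, to find the current primary of the metadata store, you can filter on the role label:
$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-cluster-mysql-metadata,kubedb.com/role=primary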
Delete the primary MySQL pod
$ kubectl delete pod -n demo druid-cluster-mysql-metadata-0
pod "druid-cluster-mysql-metadata-0" deleted
We deleted the druid-cluster-mysql-metadata-0 pod, which had the primary role. Watch how KubeDB handles the failover:
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 primary
druid-cluster-mysql-metadata-1 standby
druid-cluster-mysql-metadata-2 primary
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
You can see how quickly another pod takes the role of primary and the deleted pod is recreated automatically after a few seconds with the role of standby.
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 standby
druid-cluster-mysql-metadata-1 standby
druid-cluster-mysql-metadata-2 primary
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
Delete two standby MySQL pods
You can also delete both standby pods, druid-cluster-mysql-metadata-0 and druid-cluster-mysql-metadata-1, to see how KubeDB handles failover.
$ kubectl delete pod -n demo druid-cluster-mysql-metadata-0 druid-cluster-mysql-metadata-1
pod "druid-cluster-mysql-metadata-0" deleted
pod "druid-cluster-mysql-metadata-1" deleted
For a few seconds, you will see that the standby roles are missing.
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0
druid-cluster-mysql-metadata-1
druid-cluster-mysql-metadata-2 primary
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
After 10-30 seconds, you can see that the deleted pods are recreated automatically and rejoin the MySQL cluster with their respective roles.
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 standby
druid-cluster-mysql-metadata-1 standby
druid-cluster-mysql-metadata-2 primary
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
Note: You should not delete all MySQL pods at a time, since MySQL relies on a majority of nodes to maintain quorum. If all nodes are deleted, quorum is lost and the entire MySQL cluster becomes unavailable. This in turn disrupts the Druid cluster’s ability to manage its metadata and coordinate its distributed components, which may result in service interruptions and potential data loss.
Case 3: Delete a Broker Pod
Druid Brokers can be scaled out and all running servers will be active and queryable. We recommend placing them behind a load balancer.
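KubeDB fronts the Druid components with Kubernetes Services, which already spread traffic across the healthy replicas; you can list them (exact Service names may vary by KubeDB version) with:
$ kubectl get svc -n demo | grep druid-cluster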
Delete a Broker pod and observe failover:
$ kubectl delete pod -n demo druid-cluster-brokers-0
pod "druid-cluster-brokers-0" deleted
Monitor the pods:
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 standby
druid-cluster-mysql-metadata-1 primary
druid-cluster-mysql-metadata-2 standby
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
Since Broker pods are stateless, they can be seamlessly recreated without affecting the cluster’s operations, ensuring no disruptions occur.
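To validate query-path continuity while a Broker is being replaced, you can poll Druid’s health endpoint through the Router. A minimal sketch, assuming the Router Service is named druid-cluster-routers and listens on Druid’s default Router port 8888 (adjust the Service name and port to your cluster):
$ kubectl port-forward -n demo svc/druid-cluster-routers 8888:8888
# in a separate terminal; this should keep responding while the Broker pod is recreated
$ curl http://localhost:8888/status/health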
Case 4: Delete an Overlord or Coordinator Pod
For highly-available Apache Druid Coordinators and Overlords, we recommend running multiple servers. If they are all configured to use the same ZooKeeper cluster and metadata storage, they will automatically fail over between each other as necessary. Only one will be active at a time, but inactive servers will redirect to the currently active server.
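You can check which instance currently holds leadership through Druid’s leader APIs. A sketch, assuming you port-forward a Coordinator pod on its default port 8081; the Overlord exposes the equivalent endpoint /druid/indexer/v1/leader on its default port 8090, and if authentication is enabled in your cluster, pass the admin credentials to curl:
$ kubectl port-forward -n demo pod/druid-cluster-coordinators-0 8081:8081
# in a separate terminal
$ curl http://localhost:8081/druid/coordinator/v1/leader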
Delete a Coordinator Pod
$ kubectl delete pod -n demo druid-cluster-coordinators-0
pod "druid-cluster-coordinators-0" deleted
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 primary
druid-cluster-mysql-metadata-1 standby
druid-cluster-mysql-metadata-2 standby
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
Delete an Overlord Pod
$ kubectl delete pod -n demo druid-cluster-overlords-0
pod "druid-cluster-overlords-0" deleted
druid-cluster-brokers-0 brokers
druid-cluster-brokers-1 brokers
druid-cluster-coordinators-0 coordinators
druid-cluster-coordinators-1 coordinators
druid-cluster-historicals-0 historicals
druid-cluster-historicals-1 historicals
druid-cluster-middlemanagers-0 middleManagers
druid-cluster-middlemanagers-1 middleManagers
druid-cluster-mysql-metadata-0 primary
druid-cluster-mysql-metadata-1 standby
druid-cluster-mysql-metadata-2 standby
druid-cluster-overlords-0 overlords
druid-cluster-overlords-1 overlords
druid-cluster-routers-0 routers
druid-cluster-routers-1 routers
druid-cluster-zk-0
druid-cluster-zk-1
druid-cluster-zk-2
myminio-default-0
Note: You should not delete all Coordinator or Overlord pods at a time, since only one is active at a time and the remaining pods serve as standbys. Deleting them all disrupts the Druid cluster’s ability to coordinate and manage its distributed components, which may result in service interruptions and potential data loss.
Cleanup
To clean up, run:
$ kubectl delete druid -n demo druid-cluster
$ kubectl delete ns demo
Next Steps
- Learn about backup and restore for Druid using Stash.
- Monitor your Druid cluster with Prometheus integration.
- Explore Druid configuration options.
- Contribute to KubeDB: contribution guidelines.