Using Prometheus (CoreOS operator) with KubeDB

This tutorial will show you how to monitor an Elasticsearch database using Prometheus via the CoreOS Prometheus operator.

Before You Begin

First, you need a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube.

Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster by following the steps here.

To keep things isolated, this tutorial uses a separate namespace called demo throughout.

$ kubectl create ns demo
namespace "demo" created

$ kubectl get ns demo
NAME    STATUS  AGE
demo    Active  5s

Note: The YAML files used in this tutorial are stored in the docs/examples/elasticsearch folder of the kubedb/cli repository on GitHub.

This tutorial assumes that you are familiar with Elasticsearch concepts.

Deploy CoreOS-Prometheus Operator

In an RBAC-enabled cluster

If RBAC is enabled, run the following command to prepare your cluster for this tutorial:

$ kubectl create -f https://raw.githubusercontent.com/kubedb/cli/0.8.0-beta.2/docs/examples/monitoring/coreos-operator/rbac/demo-0.yaml
clusterrole "prometheus-operator" created
serviceaccount "prometheus-operator" created
clusterrolebinding "prometheus-operator" created
deployment "prometheus-operator" created

Watch the Deployment’s Pods.

$ kubectl get pods -n demo --selector=operator=prometheus --watch
NAME                                   READY     STATUS    RESTARTS   AGE
prometheus-operator-79cb9dcd4b-24khh   1/1       Running   0          46s
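
If the pod gets stuck before reaching the Running state, the operator's logs are the first place to look:

$ kubectl logs -n demo deployment/prometheus-operator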

This CoreOS-Prometheus operator will create some supported Custom Resource Definitions (CRDs).

$ kubectl get crd
NAME                                    AGE
alertmanagers.monitoring.coreos.com     3m
prometheuses.monitoring.coreos.com      3m
servicemonitors.monitoring.coreos.com   3m

Once the Prometheus CRDs are registered, run the following command to create a Prometheus.

$ kubectl create -f https://raw.githubusercontent.com/kubedb/cli/0.8.0-beta.2/docs/examples/monitoring/coreos-operator/rbac/demo-1.yaml
clusterrole "prometheus" created
serviceaccount "prometheus" created
clusterrolebinding "prometheus" created
prometheus "prometheus" created
service "prometheus" created

Verify the RBAC resources

$ kubectl get clusterroles
NAME                  AGE
prometheus            1m
prometheus-operator   5m
$ kubectl get clusterrolebindings
NAME                  AGE
prometheus            1m
prometheus-operator   5m

In a cluster without RBAC

If RBAC is not enabled, run the following command to prepare your cluster for this tutorial:

$ kubectl create -f https://raw.githubusercontent.com/kubedb/cli/0.8.0-beta.2/docs/examples/monitoring/coreos-operator/demo-0.yaml
deployment "prometheus-operator" created

Watch the Deployment’s Pods.

$ kubectl get pods -n demo --selector=operator=prometheus --watch
NAME                                   READY     STATUS    RESTARTS   AGE
prometheus-operator-79cb9dcd4b-24khh   1/1       Running   0          46s

This CoreOS-Prometheus operator will create some supported Custom Resource Definitions (CRDs).

$ kubectl get crd
NAME                                    AGE
alertmanagers.monitoring.coreos.com     3m
prometheuses.monitoring.coreos.com      3m
servicemonitors.monitoring.coreos.com   3m

Once the Prometheus operator CRDs are registered, run the following command to create a Prometheus.

$ kubectl create -f https://raw.githubusercontent.com/kubedb/cli/0.8.0-beta.2/docs/examples/monitoring/coreos-operator/demo-1.yaml
prometheus "prometheus" created
service "prometheus" created

Prometheus Dashboard

Now, open the Prometheus dashboard in your browser by running minikube service prometheus -n demo.

Or you can get the URL of the prometheus service by running the following command:

$ minikube service prometheus -n demo --url
http://192.168.99.100:30900
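
If you are not running on Minikube, a kubectl port-forward works just as well. The command below assumes the prometheus service targets the Prometheus default port 9090, as in the demo manifest:

$ kubectl port-forward -n demo svc/prometheus 9090:9090

Then browse to http://localhost:9090.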

Now, if you go to the Prometheus dashboard, you will see that the target list is empty.

Monitor Elasticsearch with CoreOS Prometheus

Below is the Elasticsearch object created in this tutorial.

apiVersion: kubedb.com/v1alpha1
kind: Elasticsearch
metadata:
  name: coreos-prom-es
  namespace: demo
spec:
  version: "5.6"
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 50Mi
  monitor:
    agent: prometheus.io/coreos-operator
    prometheus:
      namespace: demo
      labels:
        app: kubedb
      interval: 10s

Here,

  • monitor.agent indicates the monitoring agent. For the CoreOS Prometheus operator, the value is prometheus.io/coreos-operator, as used in the spec above.
  • monitor.prometheus specifies the Prometheus-related configuration (a sketch of the resulting ServiceMonitor follows this list).
    • prometheus.namespace specifies the namespace where the ServiceMonitor is created.
    • prometheus.labels specifies the labels applied to the ServiceMonitor.
    • prometheus.port indicates the port for the Elasticsearch exporter endpoint (default: 56790).
    • prometheus.interval indicates the scraping interval (e.g., 10s).
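
For reference, the ServiceMonitor that the KubeDB operator generates from this spec should look roughly like the following. Treat this as a sketch: the object name matches the output shown later in this tutorial, but the endpoint port name and selector labels are assumptions.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubedb-demo-coreos-prom-es
  namespace: demo                    # from monitor.prometheus.namespace
  labels:
    app: kubedb                      # from monitor.prometheus.labels
spec:
  namespaceSelector:
    matchNames:
    - demo
  endpoints:
  - port: prom-http                  # assumed name of the exporter port (56790)
    interval: 10s                    # from monitor.prometheus.interval
  selector:
    matchLabels:
      kubedb.com/kind: Elasticsearch # assumed; selects the coreos-prom-es service
      kubedb.com/name: coreos-prom-es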

Now, create this Elasticsearch object with the monitoring spec above:

$ kubedb create -f https://raw.githubusercontent.com/kubedb/cli/0.8.0-beta.2/docs/examples/elasticsearch/monitoring/coreos-prom-es.yaml
validating "https://raw.githubusercontent.com/kubedb/cli/0.8.0-beta.2/docs/examples/elasticsearch/monitoring/coreos-prom-es.yaml"
elasticsearch "coreos-prom-es" created

The KubeDB operator will create a ServiceMonitor object once the Elasticsearch database is successfully running.

$ kubedb get es -n demo coreos-prom-es
NAME             STATUS    AGE
coreos-prom-es   Running   5m

You can verify it by running the following command:

$ kubectl get servicemonitor -n demo --selector="app=kubedb"
NAME                         AGE
kubedb-demo-coreos-prom-es   1m
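
To inspect the generated spec in full, dump the object as YAML:

$ kubectl get servicemonitor kubedb-demo-coreos-prom-es -n demo -o yaml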

Now, if you go to the Prometheus dashboard, you will see this database endpoint in the target list.

(Screenshot: the Prometheus dashboard target list, showing the database endpoint.)
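
To confirm that scraping actually works, try a simple query in the Prometheus expression browser. The up metric is 1 for every healthy target; filtering by namespace assumes the operator's default relabeling, which attaches a namespace label to each target:

up{namespace="demo"}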

Cleaning up

To clean up the Kubernetes resources created by this tutorial, run the following commands:

$ kubedb delete es -n demo --all --force

$ kubectl delete ns demo
namespace "demo" deleted

Next Steps