Our company is developing an application that runs in three separate Kubernetes clusters (production, staging, testing). We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mailserver.
So basically we have 3 different environments with different versions of our application. And we have some shared services that just need to run and we do not care much about them:
- Monitoring: We need to install InfluxDB and Grafana. Every cluster has a pre-installed Heapster that needs to send its data to our tools.
- Logging: We haven't decided yet.
- Mailserver (https://github.com/tomav/docker-mailserver)
- Independent services: Sentry, GitLab
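For context on the monitoring point: Heapster pushes to InfluxDB via a `--sink` flag on its container, so pointing a cluster at a (possibly external) InfluxDB is a one-line change on the Heapster deployment. A sketch of the relevant fragment, where the service name and image version are assumptions for illustration:

```yaml
# Fragment of a heapster Deployment spec; only the sink flag matters here.
spec:
  containers:
    - name: heapster
      image: k8s.gcr.io/heapster-amd64:v1.5.4
      command:
        - /heapster
        # Point the sink at InfluxDB's HTTP endpoint. An external URL
        # (e.g. https://influxdb.example.com:8086) works the same way,
        # as long as the cluster can reach it.
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
```

So options 2 and 3 below mostly differ in whether that URL points inside or outside the cluster.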
I am not sure where to run these shared services. These are the options I found:
1. Inside each cluster
We install the tools three times, once per environment.
Cons:
- We don't have one central point to analyze our systems.
- If the whole cluster is down, we cannot look at anything.
- Installing the same tools multiple times does not feel right.
2. Create an additional cluster
We install the shared tools in an additional kubernetes-cluster.
Cons:
- Cost for an additional cluster
- It's probably harder to send ongoing data to an external cluster (networking, security, firewalls, etc.).
3. Use an additional root server
We run Docker containers on an old-school root server.
Cons:
- Feels contradictory to run a plain root server next to cutting-edge Kubernetes.
- Single point of failure.
- We need to manage the Docker containers manually (or attach the machine to Rancher).
I tried googling the problem but couldn't find anything on the topic. Can anyone give me a hint or some links on this? Or is it simply not a relevant problem that a cluster might go down?
To me, the second option sounds the least evil, but I cannot yet estimate how hard it is to transfer data from one cluster to another.
The important questions are:
- Is it a problem to keep monitoring data inside a cluster, given that nobody can see that data while the cluster is offline?
- Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
- Is it (easily) possible to send metrics and logs from one Kubernetes cluster to another? (We are running Kubernetes on OpenTelekomCloud, which is basically OpenStack.)
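On the last question, part of the answer is that shipping metrics to another cluster boils down to an HTTP call: InfluxDB 1.x accepts writes over a plain REST endpoint using its line protocol, so anything that can reach the endpoint can write. A minimal sketch in Python, where the URL and database name are placeholders:

```python
# Sketch: writing one metric to a remote InfluxDB 1.x over its HTTP write API.
# The base URL and database name below are placeholders, not real endpoints.
import urllib.request


def influx_line(measurement, tags, fields):
    """Build an InfluxDB line-protocol string, e.g. 'cpu,cluster=staging value=0.42'."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part}"


def send_metric(base_url, db, line):
    # POST the line-protocol payload to /write; InfluxDB answers 204 on success.
    req = urllib.request.Request(
        f"{base_url}/write?db={db}",
        data=line.encode("utf-8"),
        method="POST",
    )
    return urllib.request.urlopen(req)


line = influx_line("cpu_usage", {"cluster": "staging"}, {"value": 0.42})
print(line)  # cpu_usage,cluster=staging value=0.42
# send_metric("https://influxdb.shared.example.com:8086", "k8s", line)
```

So the hard part is not the protocol but the networking in between (reachability, TLS, auth, firewall rules), which is what I'm trying to gauge for option 2.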
Thanks for your hints,
Marius