
Our company is developing an application that runs in 3 separate Kubernetes clusters, one per environment (production, staging, testing), each running a different version of the application. We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mail server.

So basically we have 3 different environments with different versions of our application, plus some shared services that just need to run and that we do not care much about:

  • Monitoring: We need to install InfluxDB and Grafana. Every cluster has a pre-installed Heapster that needs to send data to our tools (see the sketch after this list).
  • Logging: We haven't decided on a solution yet.
  • Mail server (https://github.com/tomav/docker-mailserver)
  • Independent services: Sentry, GitLab
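For illustration, here is a minimal sketch of what pointing a cluster's pre-installed Heapster at a shared InfluxDB could look like, assuming the Heapster Deployment in kube-system can be edited. The image tag and the InfluxDB URL are placeholders, not values from our setup:

```yaml
# Sketch: Heapster Deployment whose InfluxDB sink points at a
# shared InfluxDB instead of a local one (URL is hypothetical).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.0
        command:
        - /heapster
        # read metrics from the local cluster, as usual
        - --source=kubernetes:https://kubernetes.default
        # ...but write them to the shared InfluxDB (placeholder URL)
        - --sink=influxdb:https://influxdb.shared.example.com:8086
```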

I am not sure where to run these shared services. These are the options I found:

1. Inside each cluster

We install the tools 3 times, once for each of the 3 environments.

Cons:

  • We don't have a single central place to analyze our systems.
  • If the whole cluster is down, we cannot look at anything.
  • Installing the same tools multiple times does not feel right.

2. Create an additional cluster

We install the shared tools in an additional Kubernetes cluster.

Cons:

  • Cost of an additional cluster.
  • It's probably harder to send ongoing data to an external cluster (networking, security, firewalls, etc.).

3. Use an additional root server

We run Docker containers on an old-school root server.

Cons:

  • It feels contradictory to use a root server instead of cutting-edge Kubernetes.
  • Single point of failure.
  • We need to manage the Docker containers manually (or attach the machine to Rancher).

I tried googling this problem but could not find anything on the topic. Can anyone give me a hint or some links? Or is the possibility of a cluster going down simply not a relevant problem?

To me, the second option sounds the least evil, but I cannot yet estimate how hard it is to transfer data from one cluster to another.

The important questions are:

  • Is it a problem to keep monitoring data in a cluster, given that one cannot see that data if the cluster is offline?
  • Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
  • Is it (easily) possible to send metrics and logs from one Kubernetes cluster to another (we are running Kubernetes on OpenTelekomCloud, which is basically OpenStack)?

Thanks for your hints,

Marius

All your questions depend on the solutions you use for monitoring, logging, etc. Maybe you can add some more details? – Anton Kostenko
I added some information. The pre-installed Heapster might be interesting. – Marius

1 Answer


That is a very complex and philosophical topic, but I will give you my view on it and some facts to support it.

I think the best way is the second one, creating an additional cluster, and here is why:

  1. You need a point that is accessible from all of your environments. With a separate cluster, you can set up the same firewall rules, routes, etc. in every environment without affecting your current workload.

  2. Yes, you need to pay a bit more. However, you need resources to run your shared applications anyway, and the overhead of the Kubernetes infrastructure itself is low compared to the applications.

  3. With a separate cluster, you can set up a real HA solution, which you might not need for the staging and testing clusters, so you will not pay for it multiple times.

  4. Technically, it is also fine. Each cluster's Heapster can push its data to a central store, and almost any logging solution can work across multiple clusters as well. All the other applications can simply run on the separate cluster, and that's all you need to do with them. A sketch follows below.
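To make point 4 concrete: a single Heapster only reads from its own cluster, so the usual pattern is that each cluster's Heapster writes to the same central InfluxDB. One way to keep the environments apart is the InfluxDB sink's db option, which selects the target database; the URL and database names here are placeholders:

```yaml
# Per-cluster Heapster sink argument; only the db parameter differs.
# production cluster:
- --sink=influxdb:https://influxdb.shared.example.com:8086?db=production
# staging cluster:
- --sink=influxdb:https://influxdb.shared.example.com:8086?db=staging
# testing cluster:
- --sink=influxdb:https://influxdb.shared.example.com:8086?db=testing
```

In Grafana you can then add one datasource per database and compare the environments side by side.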

Now, about your questions:

Is it a problem to keep monitoring data in a cluster, given that one cannot see that data if the cluster is offline?

No, it is not a problem with a separate cluster.

Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?

I think so. At least I have done it several times, and I know other projects with a similar architecture.

Is it (easily) possible to send metrics and logs from one Kubernetes cluster to another (we are running Kubernetes on OpenTelekomCloud, which is basically OpenStack)?

Yes, nothing complex there, and it usually does not depend on the platform; see the sketch below.
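As a sketch of the networking side, assuming the OpenTelekomCloud cloud-provider integration can provision load balancers (the name, namespace, and label here are illustrative): the shared cluster exposes InfluxDB through a Service of type LoadBalancer, and the other clusters simply send their metrics to that address.

```yaml
# Sketch: expose the shared InfluxDB so the other clusters can reach it.
apiVersion: v1
kind: Service
metadata:
  name: influxdb-external
  namespace: monitoring     # hypothetical namespace
spec:
  type: LoadBalancer        # asks the cloud provider for an external LB
  selector:
    app: influxdb           # assumed label on the InfluxDB pods
  ports:
  - name: api
    port: 8086
    targetPort: 8086
```

You would still want to restrict access to this endpoint (security groups or firewall rules limited to the other clusters' egress addresses), since InfluxDB's own authentication should not be the only barrier.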