
I executed a scenario where I deployed a Microsoft SQL database on my K8s cluster using a PV and PVC. It works well, but I see some strange behaviour: I created a PV, but it is only visible on one node and not on the other worker nodes. What am I missing here? Any inputs please?

Background:

Server 1 - Master

Server 2 - Worker

Server 3 - Worker

Server 4 - Worker

Pod: "MyDb" is running on server (node) 4 without any ReplicaSet. I am guessing that because my pod is running on server-4, the PV got created on server-4 when the pod was created with the PVC (claim) referenced in it.

Please let me know your thoughts on this issue, or share your inputs about mounting a shared disk in a production cluster.

Those who want to deploy a SQL DB on a K8s cluster can refer to the blog posts by Phillips. Links below:

https://www.phillipsj.net/posts/sql-server-on-linux-on-kubernetes-part-1/ (without PV)

https://www.phillipsj.net/posts/sql-server-on-linux-on-kubernetes-part-2/ (with PV and Claim)

Regards, Farooq


Please see below my findings on my original problem statement. Problem: The pod for SQL Server was created. At runtime K8s scheduled this pod on server-4 and hence created the PV on server-4. However, the PV path (/tmp/sqldata) was not created on the other nodes.

  • I shut down the server-4 node and ran the command to delete the SQL pod (no replica set was used initially).
  • The pod's status changed to "Terminating".
  • Nothing happened for a while.
  • I restarted server-4 and noticed the pod got deleted immediately.

Next step:

  • I stopped server-4 again and created the same pod.
  • The pod was created on the server-3 node at runtime, and I see the PV (/tmp/sqldata) was created on server-3 as well. However, all my data (for example, the sample tables) was lost. It is a fresh new PV on server-3 now.

I was assuming the PV would be a mounted volume backed by external storage, not storage/disk from a node in the cluster.
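For the data to survive a node failure, the PV does need to point at storage that lives outside any single node. A minimal sketch of an NFS-backed PV and claim, assuming an NFS server is available (the server address, export path, and names below are placeholders, not from the original setup):

```yaml
# Hypothetical NFS-backed PersistentVolume: the data lives on the NFS
# server, so whichever node the pod lands on mounts the same files.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqldata-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100   # placeholder NFS server address
    path: /exports/sqldata  # placeholder export path
---
# Claim that the pod references; Kubernetes binds it to the PV above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

With this setup, rescheduling the pod to server-3 would mount the same NFS export, and the sample tables would still be there.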

You'll need to post your kubernetes configuration and any errors you see to get much useful help. – PaulProgrammer

Hi Paul, I am not getting an error; rather, I am noticing K8s behaviour that is not very clear to me. I created a PV and referenced it in my pod. However, that PV is not visible on the other nodes, so I wonder: if the server where K8s created the pod goes down and the pod gets created on another active node, will it still have a view of the old data? – Solutions Architect

1 Answer


I am guessing that because my pod is running on server-4, the PV got created on server-4 when the pod was created with the PVC (claim) referenced in it.

This is more or less correct, and you should be able to verify it by simply deleting the Pod and recreating it (since you say you do not have a ReplicaSet doing that for you). The PersistentVolume will then be visible on the node where the Pod is scheduled.

Edit: The above assumes that you are using an external storage provider such as NFS or AWS EBS (see the list of possible storage providers for Kubernetes). With HostPath the above does NOT apply: the PV is created locally on one node and will not be mounted on another node.
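For illustration, a hostPath PV like the one in the linked blog post looks roughly like this (names are illustrative); note that the path refers to the local filesystem of whichever node the pod runs on, which is exactly why the data looked "fresh" after the pod moved to server-3:

```yaml
# hostPath PV: /tmp/sqldata is a directory on an individual node's disk,
# so a pod rescheduled to another node sees that node's (empty) directory.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqldata-hostpath
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/sqldata
```

hostPath is fine for single-node experiments, but it ties the data to one machine, which defeats the purpose of rescheduling pods across nodes.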

There is no reason to mount the PersistentVolume on the other nodes as well. Imagine having hundreds of nodes: would you want to mount your PersistentVolume to all of them while your Pod is running on just one?

You are also asking about "shared" disks. The PersistentVolume created in the blog post you linked is using ReadWriteMany, so you actually can start multiple Pods accessing the same volume (given that your storage supports that as well). But your software (a database in your case) needs to support having multiple processes accessing the same data.
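A claim for such a shared volume only differs in its access mode; a sketch, assuming the backing storage (e.g. NFS) actually supports ReadWriteMany (the name is a placeholder):

```yaml
# A claim requesting ReadWriteMany: several pods on different nodes may
# mount it simultaneously, if the underlying storage supports RWX.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-sqldata
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

Block storage like AWS EBS typically only supports ReadWriteOnce, which is another reason a single-writer database usually does not need RWX.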

Especially when considering databases, you should also look into StatefulSets, which basically allow you to define Pods that always use the same storage; this can be very interesting for databases. Whether you should run databases on Kubernetes at all is a whole different topic...
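A minimal StatefulSet sketch for a case like yours, assuming a default StorageClass exists to provision the volume (the names are illustrative; the image and mount path are the standard ones for SQL Server on Linux):

```yaml
# Sketch of a StatefulSet: each replica gets its own PVC from
# volumeClaimTemplates and reattaches to that same PVC after a restart
# or rescheduling, so the data follows the pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
        - name: sqlserver
          image: mcr.microsoft.com/mssql/server:2019-latest
          volumeMounts:
            - name: sqldata
              mountPath: /var/opt/mssql  # SQL Server's data directory
  volumeClaimTemplates:
    - metadata:
        name: sqldata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Unlike a bare pod with a hostPath PV, deleting this pod and letting the StatefulSet recreate it (even on another node) reattaches the same claim, so the sample tables would survive.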