0
votes

I'm using a Kubernetes Deployment with a persistent volume to run my application, like this example: https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-wordpress-pd , but when I try to add more replicas or autoscale, all the new pods try to connect to the same volume. How can I automatically create a new volume for each new pod, the way StatefulSets (PetSets) are able to do?
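Roughly, this is what I have now (a trimmed sketch, not the exact files from that example; the names, sizes, and image are placeholders). Every replica ends up mounting the same claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes: ["ReadWriteOnce"]   # a GCE PD can only attach to one node at a time
  resources:
    requests:
      storage: 20Gi
---
apiVersion: extensions/v1beta1     # Deployment API group as of Kubernetes 1.5/1.6
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 3                      # scaling up: all three pods try to mount the SAME claim
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # shared by every replica; no per-pod volume is created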

1
PetSets (or StatefulSets in Kubernetes 1.5) are designed to solve this problem... why don't you use them? – Mark O'Connor
I just want to know if it's possible to do this with a Deployment and use something like volumeClaimTemplates, which PetSets/StatefulSets use, to automatically generate new volumes. – montatich
@montatich, you cannot do that with a Deployment. Deployments manage ReplicaSets, which are meant for stateless applications that typically do not need dedicated storage of their own. The right solution would be a StatefulSet, or multiple ReplicaSets each connecting to their own storage. – Anirudh Ramanathan
Thank you, I'm going to use StatefulSets. – montatich
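For reference, this is the shape the comments point to: a StatefulSet whose volumeClaimTemplates stamps out one PVC per pod (data-web-0, data-web-1, ...) instead of sharing a single claim. A minimal sketch only; the names, the image, and the apps/v1beta1 API version (the StatefulSet group in Kubernetes 1.5/1.6) are assumptions:

apiVersion: apps/v1beta1        # StatefulSet API group in Kubernetes 1.5/1.6
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web              # a headless Service named "web" is assumed to exist
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: mysql:5.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:         # one PersistentVolumeClaim per pod: data-web-0, data-web-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi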

1 Answer

0
votes

The conclusion I reached for K8S 1.6 is that you can't. However, you can use NFS. If, like CrateDB, your clustered application can create a folder for each node under the volume mount, then you can auto-scale. I auto-scale CrateDB as a Deployment using this configuration:

https://github.com/erik777/kubernetes-cratedb

which relies on an nfs-server that I deploy as an RC with a PVC/PV:

SAME_BASE/kubernetes-nfs-server
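In outline, the shared-volume wiring looks something like this (a minimal sketch, not lifted verbatim from the repos above; the server address, export path, sizes, and image tag are placeholders, and older clusters may need the nfs-server Service's ClusterIP rather than a DNS name in the PV):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared
spec:
  capacity:
    storage: 50Gi
  accessModes: ["ReadWriteMany"]   # NFS lets many pods mount the same export
  nfs:
    server: 10.0.0.10              # ClusterIP of the nfs-server Service (placeholder)
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 50Gi
---
apiVersion: extensions/v1beta1     # Deployment API group as of K8S 1.6
kind: Deployment
metadata:
  name: cratedb
spec:
  replicas: 3                      # scale or autoscale freely
  template:
    metadata:
      labels:
        app: cratedb
    spec:
      containers:
      - name: cratedb
        image: crate:1.0
        volumeMounts:
        - name: data
          mountPath: /data         # CrateDB creates a per-node folder under this mount
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-shared    # every pod shares the one NFS-backed claim

The key point is the ReadWriteMany access mode: it is the volume plugin, not the Deployment, that allows many pods to share one PV.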

It is on my TODO list to explore distributed file systems such as GlusterFS. For K8S Deployments, your choice of file system is what solves the problem.

You can also engage the scalability and storage SIGs in the K8S community to help prioritize this use case. Adding the capability to K8S would remove the requirement for the clustered application to handle node separation within a shared volume, and would avoid introducing additional points of failure between the clustered app and the PV.

GitHub: kubernetes/community

Hopefully, we will see an out-of-the-box K8S solution by 2.0.

(NOTE: I had to change 2 of the GitHub links because I don't have "10 reputation".)