I'm using a Kubernetes Deployment with a persistent volume to run my application, like this example: https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-wordpress-pd , but when I try to add more replicas or autoscale, all the new pods try to connect to the same volume. How can I automatically create a new volume for each new pod, the way StatefulSets (formerly PetSets) can?
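Roughly, my setup looks like this (a minimal sketch with placeholder names, modeled on that example):

```yaml
# Every replica mounts the SAME PersistentVolumeClaim, so scaling the
# Deployment just adds more pods pointed at one volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
apiVersion: extensions/v1beta1   # Deployment API group circa K8S 1.6
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                    # new replicas all reuse app-data
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: mysql:5.6         # e.g. the mysql part of mysql-wordpress-pd
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data    # same claim for every pod
```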
1 Answer
The conclusion I reached for K8S 1.6 is that you can't. However, you can use NFS. If, like CrateDB, your clustered application can create a folder for each node under the volume mount, then you can auto-scale. So I auto-scale CrateDB as a Deployment using this configuration:
https://github.com/erik777/kubernetes-cratedb
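The key idea, as a rough sketch rather than that repo's exact manifests (the image tag and claim name here are placeholders):

```yaml
# Hedged sketch: all replicas share one ReadWriteMany NFS-backed claim,
# and CrateDB creates a folder for each node under the mount, so pods
# do not step on each other's data.
apiVersion: extensions/v1beta1   # Deployment API group circa K8S 1.6
kind: Deployment
metadata:
  name: cratedb
spec:
  replicas: 3                    # safe to scale up or down
  template:
    metadata:
      labels:
        app: cratedb
    spec:
      containers:
      - name: cratedb
        image: crate:latest      # placeholder tag
        volumeMounts:
        - name: data
          mountPath: /data       # each node makes its own subfolder here
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs         # the shared NFS-backed claim (see below)
```

Because each node keeps its own folder under the shared mount, scaling the Deployment never causes two pods to write to the same files.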
That configuration relies on an nfs-server, which I deploy as an RC with a PVC/PV:
SAME_BASE/kubernetes-nfs-server
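The NFS server piece looks roughly like this (again a sketch, not that repo's exact manifests; it assumes the stock volume-nfs image from the Kubernetes examples, the backing disk claim and the Service giving the server a stable IP are elided, and the sizes and IP are placeholders):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - containerPort: 2049    # nfs
        - containerPort: 20048   # mountd
        - containerPort: 111     # rpcbind
        securityContext:
          privileged: true
        volumeMounts:
        - name: export
          mountPath: /exports
      volumes:
      - name: export
        persistentVolumeClaim:
          claimName: nfs-server-disk   # any RWO disk claim (definition omitted)
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteMany"]   # the property that lets many pods share it
  nfs:
    server: 10.0.0.10   # cluster IP of the nfs-server Service (placeholder)
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 20Gi
```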
It is on my TODO list to explore distributed file systems such as GlusterFS. For K8S Deployments, your choice of file system is your remedy.
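If GlusterFS pans out, the in-tree volume plugin should work roughly like this (a hedged sketch: it assumes an existing Gluster volume, the GlusterFS client installed on every node, and placeholder IPs):

```yaml
# The in-tree GlusterFS plugin needs an Endpoints object naming the
# Gluster nodes; pods then mount the Gluster volume directly.
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.0.0.11        # placeholder Gluster node IPs
  - ip: 10.0.0.12
  ports:
  - port: 1              # a port value is required but unused by the plugin
---
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: gluster
      mountPath: /mnt/gluster
  volumes:
  - name: gluster
    glusterfs:
      endpoints: glusterfs-cluster   # the Endpoints object above
      path: my-gluster-volume        # name of an existing Gluster volume
      readOnly: false
```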
You can also engage the scalability and storage SIGs in the K8S community to help prioritize this use case. Adding the capability to K8S would remove the requirement for a clustering solution to handle node separation in a shared volume, as well as prevent the introduction of additional points of failure between the clustered app and the PV.
GITHUB kubernetes/community
Hopefully, we will see an out-of-the-box (OTB) K8S solution by 2.0.
(NOTE: Had to change 2 of the GITHUB links because I don't have "10 reputation")