I'm trying to set up a Kubernetes PetSet as described in the documentation. When I create the PetSet, I can't get the PersistentVolumeClaim to bind to a PersistentVolume. Here is my YAML file defining the PetSet:

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: 'ml-nodes'
spec:
  serviceName: "ml-service"
  replicas: 1
  template:
    metadata:
      labels:
        app: marklogic
        tier: backend
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
        - name: 'ml'
          image: "192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1"
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
              name: ml8000
              protocol: TCP
            - containerPort: 8001
              name: ml8001
            - containerPort: 7997
              name: ml7997
            - containerPort: 8002
              name: ml8002
            - containerPort: 8040
              name: ml8040
            - containerPort: 8041
              name: ml8041
            - containerPort: 8042
              name: ml8042
          volumeMounts:
            - name: ml-data
              mountPath: /data/vol-data
          lifecycle:
            preStop:
              exec:
                # SIGTERM triggers a quick exit; gracefully terminate instead
                command: ["/etc/init.d/MarkLogic", "stop"]
      volumes:
        - name: ml-data
          persistentVolumeClaim:
            claimName: ml-data 
      terminationGracePeriodSeconds: 30
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi

If I run kubectl describe on the created PetSet, I see the following:

Name:           ml-nodes
Namespace:      default
Image(s):       192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1
Selector:       app=marklogic,tier=backend
Labels:         app=marklogic,tier=backend
Replicas:       1 current / 1 desired
Annotations:        <none>
CreationTimestamp:  Tue, 20 Sep 2016 13:23:14 -0400
Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen LastSeen    Count   From        SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----        -------------   --------    ------          -------
  33m       33m     1   {petset }           Warning     FailedCreate        pvc: ml-data-ml-nodes-0, error: persistentvolumeclaims "ml-data-ml-nodes-0" not found
  33m       33m     1   {petset }           Normal      SuccessfulCreate    pet: ml-nodes-0

I'm running this in a minikube environment on my local machine. What am I missing here?
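
For reference, the claim the controller is looking for (named <template>-<pet>, i.e. ml-data-ml-nodes-0 per the events above) can be checked directly:

kubectl get pvc ml-data-ml-nodes-0
kubectl get pv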

2 Answers


There is an open issue on minikube for this; persistent volume provisioning support appears to be unfinished there at this time.

For provisioning to work with local storage, the controller manager needs the following flag, and minikube does not currently enable it:

--enable-hostpath-provisioner[=false]: Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.

Reference: http://kubernetes.io/docs/admin/kube-controller-manager/
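
If you were running the controller manager yourself, enabling it would look something like this (just a sketch; minikube currently gives you no way to pass the flag):

kube-controller-manager --enable-hostpath-provisioner=true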

For local development/testing, it would work if you use hack/local-up-cluster.sh from a Kubernetes source checkout to start a local cluster, after setting an environment variable:

export ENABLE_HOSTPATH_PROVISIONER=true 
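
Alternatively, as an unverified workaround on minikube itself, you could pre-create a hostPath PersistentVolume for the generated claim to bind to. The PV name and path below are placeholders, and you may also need to drop the storage-class annotation from the claim template so the controller binds statically instead of trying to provision:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-pv-0
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/ml-data-0

With a matching PV available, the claim ml-data-ml-nodes-0 should be able to bind without dynamic provisioning.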

You should be able to use PetSets in the latest version of minikube, as it uses Kubernetes v1.4.1 as its default version.
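
You can confirm the server version your cluster is actually running with:

kubectl version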