5 votes

I'm playing with Kubernetes and Google Container Engine (GKE).

I deployed a container from the jupyter/all-spark-notebook image.

This is my replication controller:

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "datalab-notebook"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "datalab-notebook"
    },
    "template": {
      "metadata": {
        "name": "datalab-notebook",
        "labels": {
          "environment": "TEST",
          "app": "datalab-notebook"
        }
      },
      "spec": {
        "containers": [{
          "name": "datalab-notebook-container",
          "image": "jupyter/all-spark-notebook",
          "env": [],
          "ports": [{
            "containerPort": 8888,
            "name": "datalab-port"
          }],
          "volumeMounts": [{
            "name": "datalab-notebook-persistent-storage",
            "mountPath": "/home/jovyan/work"
          }]
        }],
        "volumes": [{
          "name": "datalab-notebook-persistent-storage",
          "gcePersistentDisk": {
            "pdName": "datalab-notebook-disk",
            "fsType": "ext4"
          }
        }]
      }
    }
  }
}

As you can see, I mounted a Google Compute Engine persistent disk. My issue is that the container runs as a non-root user while the mounted disk is owned by root, so my container cannot write to the disk.

  • Is there a way to mount GCE persistent disks read/write for containers that run as non-root users?
  • A more general question: is it safe to run a container as the root user in Google Container Engine?

Thank you in advance for your input.

2
What would you define as safe? GKE gives you each VM that runs as part of the Kubernetes cluster (at least it used to; I'm not sure if that's still the case, but I believe so). So a root user in a container is the same as running as root on your host: if your application normally runs fine as root, you should be fine. – Christian Grabowski

2 Answers

13 votes

You can use the fsGroup field of the pod's securityContext to make GCE PDs writable by non-root users.

In this example, the GCE PD volume will be owned by group 1234, and the container process will have 1234 in its list of supplemental groups:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
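
Applied to the replication controller in the question, the same fix is a securityContext added to the pod template's spec. This is a sketch only; 1234 is an arbitrary group id, and the rest is copied from the question's manifest:

```json
"spec": {
  "securityContext": {
    "fsGroup": 1234
  },
  "containers": [{
    "name": "datalab-notebook-container",
    "image": "jupyter/all-spark-notebook",
    "ports": [{
      "containerPort": 8888,
      "name": "datalab-port"
    }],
    "volumeMounts": [{
      "name": "datalab-notebook-persistent-storage",
      "mountPath": "/home/jovyan/work"
    }]
  }],
  "volumes": [{
    "name": "datalab-notebook-persistent-storage",
    "gcePersistentDisk": {
      "pdName": "datalab-notebook-disk",
      "fsType": "ext4"
    }
  }]
}
```

With this in place the PD's filesystem is group-owned by 1234 at mount time, so the non-root notebook user can write to /home/jovyan/work without any changes on the host.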

2 votes

I ran into the same problem. The workaround I used was to run df -h on the host machine the container was running on. From there I was able to find the mount point of the persistent storage. It should look something like /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/<pd-name>. It will also be one of the entries whose filesystem starts with /dev but isn't mounted at /.

Once you've found it, you can run sudo chmod -R 0777 /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/<pd-name> from the host, and now at least your container can use the directory, though the files will still be owned by root.
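
A variant that avoids touching the host is to let the pod fix the ownership itself with an init container before the main container starts. This is a sketch, not a tested recipe: it assumes uid/gid 1000, which is the default jovyan uid in the jupyter images, and it requires a Kubernetes version with init container support:

```yaml
spec:
  initContainers:
  - name: fix-permissions
    image: busybox
    # Runs as root, so it can chown the PD mount for the jovyan user.
    command: ["sh", "-c", "chown -R 1000:1000 /home/jovyan/work"]
    volumeMounts:
    - name: datalab-notebook-persistent-storage
      mountPath: /home/jovyan/work
  # ...containers and volumes stay as in the question's manifest
```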