2 votes

I followed the instructions to install nvidia-docker 2 and then installed Kubernetes 1.10 via kubeadm on RHEL 7. These are the steps I ran:

curl -s -L https://nvidia.github.io/nvidia-docker/rhel7.4/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
yum update

yum install docker

yum install -y nvidia-container-runtime-hook

yum install --downloadonly --downloaddir=/tmp/  nvidia-docker2-2.0.3-1.docker1.13.1.noarch nvidia-container-runtime-2.0.0-1.docker1.13.1.x86_64
rpm -Uhv --replacefiles /tmp/nvidia-container-runtime-2.0.0-1.docker1.13.1.x86_64.rpm /tmp/nvidia-docker2-2.0.3-1.docker1.13.1.noarch.rpm
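
# sanity check (purely optional; package names are just whatever the versions
# above pulled in) - confirm the NVIDIA pieces actually landed on the box:
rpm -qa | grep -i nvidia
which nvidia-container-runtime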

mkdir -p  /etc/systemd/system/docker.service.d/
cat <<EOF > /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd-current --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES
EOF

cat <<EOF > /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
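
# the docker unit override above is new, so reload systemd before restarting,
# otherwise the old ExecStart stays in effect:
systemctl daemon-reload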

systemctl restart docker

docker run --rm nvidia/cuda nvidia-smi
# success!
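
# double-check that docker really made nvidia the default runtime (the exact
# wording of the output varies a little between docker versions):
docker info | grep -i runtime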

I can even schedule GPU-enabled containers and see all of the GPUs from within the container.

However, when I deploy a container with:

resources:
    limits:
        nvidia.com/gpu: 1
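
(the pod itself is generated by JupyterHub; purely for illustration, a standalone pod that requests a GPU the same way would look roughly like this, with placeholder names:)

apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # placeholder name
spec:
  nodeSelector:
    group: gpu              # same selector as the real pod below
  containers:
  - name: cuda
    image: nvidia/cuda
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1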

the pod remains Pending:

jupyter         jupyterlab-gpu                 0/1       Pending     0          1m        <none>           <none>

kubectl describe shows:

Name:         jupyterlab-gpu
Namespace:    jupyter
Node:         <none>
Labels:       app=jupyterhub
              component=singleuser-server
              heritage=jupyterhub
              hub.jupyter.org/username=me
Annotations:  <none>
Status:       Pending
IP:
Containers:
  notebook:
    Image:      slaclab/slac-jupyterlab-gpu
    Port:       8888/TCP
    Host Port:  0/TCP
    Limits:
      cpu:             2
      memory:          2147483648
      nvidia.com/gpu:  1
    Requests:
      cpu:             500m
      memory:          536870912
      nvidia.com/gpu:  1
    Environment:
      JUPYTERHUB_USER:                me
      JUPYTERLAB_IDLE_TIMEOUT:        43200
      JPY_API_TOKEN:                  1fca7b3d716e4d54a98d8054d17b16fb
      CPU_LIMIT:                      2.0
      JUPYTERHUB_SERVICE_PREFIX:      /user/me/
      MEM_GUARANTEE:                  536870912
      JUPYTERHUB_API_URL:             http://10.103.19.59:8081/hub/api
      JUPYTERHUB_OAUTH_CALLBACK_URL:  /user/me/oauth_callback
      JUPYTERHUB_BASE_URL:            /
      JUPYTERHUB_API_TOKEN:           1fca7b3d716e4d54a98d8054d17b16fb
      CPU_GUARANTEE:                  0.5
      JUPYTERHUB_CLIENT_ID:           user-me
      MEM_LIMIT:                      2147483648
      JUPYTERHUB_HOST:
    Mounts:
      /home/ from generic-user-home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from no-api-access-please (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  generic-user-home:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  generic-user-home
    ReadOnly:   false
  no-api-access-please:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
QoS Class:       Burstable
Node-Selectors:  group=gpu
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  14s (x13 over 2m)  default-scheduler  0/8 nodes are available: 1 node(s) were not ready, 6 node(s) didn't match node selector, 7 Insufficient nvidia.com/gpu.

I am able to schedule containers onto the node without the GPU resource limit without any issues.

Is there a way I can validate that Kubernetes (kubectl?) can 'see' the GPUs?

Is the node marked as ready in kubectl get nodes? - jaxxstorm
Yes, the node is ready, but I do not see any GPU resources with kubectl get nodes -o yaml. - yee379

1 Answer

3 votes

You can view node details via kubectl get nodes -o yaml; the nvidia.com/gpu resource will be listed under status.allocatable and status.capacity, alongside cpu and memory.
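
For example (the node name is a placeholder, and the grep is just a quick way to spot the resource):

kubectl describe node <gpu-node-name> | grep -A8 -i capacity
kubectl get nodes -o yaml | grep "nvidia.com/gpu"

If nothing shows up there, the node has not registered any nvidia.com/gpu capacity, which matches the Insufficient nvidia.com/gpu message in the FailedScheduling event.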