I have a deployment configuration that uses a PersistentVolumeClaim backed by a Google Compute Engine persistent disk.
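Roughly, the volume setup looks like the sketch below. The storage size, filesystem type, and claim name are placeholders; app-pv and app-disk are the real names that appear in the error further down.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-pv
spec:
  capacity:
    storage: 10Gi            # placeholder size
  accessModes:
  - ReadWriteOnce            # a GCE persistent disk can only be attached to one node at a time in this mode
  gcePersistentDisk:
    pdName: app-disk         # the GCE disk referenced in the error
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pvc              # placeholder claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi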
I noticed that when I deploy an update to the image, the cluster tries to pull the new image, but the pod gets stuck in ContainerCreating status with this error:
Error from server (BadRequest): container "tita-api" in pod "tita-api-7569bd99d7-z44dg" is waiting to start: ContainerCreating

Checking further, I saw that the disk resource is already in use by another node:

AttachVolume.Attach failed for volume "app-pv" : googleapi: Error 400: The disk resource 'projects/tita-canary/zones/us-central1-a/disks/app-disk' is already being used by 'projects/tita-canary/zones/us-central1-a/instances/gke-tita-staging-default-pool-2cae0006-sxgk'
I am using Kubernetes 1.8. For now I changed my deployment strategy to Recreate, which works, but it takes a while to update the pods. I would really love to get this working with the RollingUpdate strategy.
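For reference, here is a minimal sketch of the Recreate change I made. The image, mount path, replica count, and claim name are placeholders; only the deployment and container names come from my actual setup.

apiVersion: apps/v1beta2      # Deployment API group available on Kubernetes 1.8
kind: Deployment
metadata:
  name: tita-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tita-api
  strategy:
    type: Recreate            # old pod is terminated (and the disk detached) before the new pod starts
  template:
    metadata:
      labels:
        app: tita-api
    spec:
      containers:
      - name: tita-api
        image: gcr.io/tita-canary/tita-api:latest   # placeholder image reference
        volumeMounts:
        - name: app-storage
          mountPath: /data                          # placeholder mount path
      volumes:
      - name: app-storage
        persistentVolumeClaim:
          claimName: app-pvc                        # placeholder claim name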
Can you post the output of kubectl describe pod tita-api-7569bd99d7-z44dg? - Robert