
I have a docker image uploaded to the GCP Container Registry that contains a web application running on TomEE, with postgres for data persistence. I created a Compute Engine instance from it and exposed HTTPS access on port 8443 with commands like the following.

gcloud beta compute instances create-with-container MY_VM --tags MY_TAG --machine-type=n1-standard-2 --container-image IMAGE_ID
gcloud compute firewall-rules create MY_VM-8443 --allow tcp:8443 --source-ranges 0.0.0.0/0 --target-tags MY_TAG

This works: the application is deployed and accessible via the GCE instance's public address on port 8443. However, when the GCE instance is restarted, any data the application has written to the container's postgres database is lost. I would like to mount a persistent disk into the container for the postgres data so that it survives restarts, and also be able to take disk snapshots for backups, which GCE supports based on what I have read of the documentation.
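
For reference, once the postgres data is on its own persistent disk, I expect snapshots for backup to work with a command along these lines (DATA_DISK and DATA_SNAPSHOT are placeholder names):

gcloud compute disks snapshot DATA_DISK --snapshot-names DATA_SNAPSHOT --zone MY_ZONE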

Based on the GCE documentation, I think this is a two-step process.

  1. Create a disk and attach it to the GCE instance (roughly the commands sketched just after this list).
  2. Mount the attached disk to the docker container with this option when creating the GCE instance: --container-mount-host-path mount-path=/mnt/disks/appdata,host-path=/var/lib/postgresql/data,mode=rw
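
For step 1, the commands I am using look roughly like this (DATA_DISK is a placeholder name and the zone must match the instance):

gcloud compute disks create DATA_DISK --size 200GB --zone MY_ZONE
gcloud compute instances attach-disk MY_VM --disk DATA_DISK --zone MY_ZONE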

I am stuck on the first step. Following the GCE "Adding or Resizing Persistent Disks" how-to, I created a disk, attached it to the GCE instance, formatted the new disk, mounted it, and updated /etc/fstab so the disk should be remounted at boot. All the steps worked up to the fstab changes, but the contents of /etc/fstab were not retained when I stopped and restarted the GCE instance with the commands below (I tried the whole process a couple of times).

gcloud compute instances stop MY_VM
gcloud compute instances start MY_VM

To verify that I had the process correct, I tried the same disk setup on a non-container GCE instance created with a command like the following. This works as expected: when the instance is stopped and started, /etc/fstab is restored and the disk is mounted.

gcloud compute instances create test-host --tags test-server --machine-type=f1-micro

To be complete, here are the commands used to set up the attached disk.

gcloud beta compute ssh MY_VM
     sudo lsblk  # List the attached disks
     sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
     sudo mkdir -p /mnt/disks/appdata
     sudo mount -o discard,defaults /dev/sdb /mnt/disks/appdata
     sudo chmod a+w /mnt/disks/appdata
     sudo cp /etc/fstab /etc/fstab.backup
     sudo blkid /dev/sdb  # Get the block ID for the disk
     sudo vi /etc/fstab   # Add the line below
       UUID=[UUID_VALUE] /mnt/disks/appdata ext4 discard,defaults 0 2

Is there a limitation for GCE container-based instances with /etc/fstab?

If this should work, how do I debug the loss of /etc/fstab on restart?

Is there a better approach to adding persistent storage to the container running in the GCE instance?


1 Answer


Since you are running a container-based GCE instance, you will have to create and mount the volume a different way. There is no limitation on /etc/fstab for GCE container-based instances; it is just a different method.

To create and mount a volume in a container, you will have to use docker commands. You can create a volume by running the following command:

$ docker volume create [volume_name]
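
The volume can then be mounted into the container when it is started, for example (my-app is a placeholder container name, and the postgres data path is assumed from your application image):

$ docker run -d --name my-app -v [volume_name]:/var/lib/postgresql/data IMAGE_ID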

I have attached an article from the official docker page on how you can create volumes in a container here. It also includes steps on how to back up and restore data volumes from a container. The article suggests using volumes since they are easier to back up.
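
As a rough sketch of the backup approach described in that article (container name and paths are placeholders):

$ docker run --rm --volumes-from my-app -v $(pwd):/backup ubuntu tar cvf /backup/pgdata.tar /var/lib/postgresql/data

This launches a temporary container that shares the volumes of my-app, mounts the current directory at /backup, and archives the postgres data directory into it.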

EDIT: From my research, this seems to be an issue with Docker. You can read more about the issue here.