
I am currently using an AWS EC2 Auto Scaling group for my application and AWS EFS for my file system. I have around 5 EFS file systems that are mounted automatically (from /etc/fstab) when new instances are launched.
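
For context, each mount is a single line in /etc/fstab roughly like the one below (the file system ID, region, and mount path here are placeholders, not my real values):

fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /var/www/html/example/app/etc nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0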

I am hitting a weird issue sometimes when the Auto Scaling group launches new instances during scale-up: on some instances, one or more of the EFS file systems is not mounted. It happens very rarely, but it damages my production system when it does.

From my understanding, it looks like the system is trying to mount EFS before networking comes online on the newly launched EC2 instance. Here is the error log:

Dec  13 01:09:08 ip-xxxxxxxxxxxxxx systemd[1]: var-www-html-mcstaging.xxxxxxxx-app-etc.mount: Mount process exited, code=exited, status=1/FAILURE
Dec  13 01:09:08 ip-xxxxxxxxxxxxxx systemd[1]: var-www-html-mcstaging.xxxxxxxxxxxx-app-etc.mount: Failed with result 'exit-code'.
Dec  13 01:09:08 ip-xxxxxxxxxxxxxx systemd[1]: Failed to mount /var/www/html/mcstaging.xxxxxxxxxxx/app/etc.
Dec  13 01:09:08 ip-xxxxxxxxxxxxxx systemd[1]: Dependency failed for Remote File Systems.
Dec  13 01:09:08 ip-xxxxxxxxxxxxxx systemd[1]: remote-fs.target: Job remote-fs.target/start failed with result 'dependency'.
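
The ordering can presumably be confirmed on an affected instance with something like the commands below (the .mount unit name is just an example; the real one is derived from the mount path):

journalctl -b -u network-online.target -u remote-fs.target
systemctl show -p After -p Requires var-www-html-example.mount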
What is your setup? Do you use an EFS provisioner of some kind? Please provide some logs of the failure. – gp42
Hi @gp42, I have updated the error log above. EFS is the managed file system service provided by AWS. – Deependra Dangal
So I assume you are mounting it as an NFS volume in the pod (it is also possible to mount EFS as a PV). What OS are you running on the Kubernetes nodes? – gp42
I am not using Kubernetes. I am installing my application directly on EC2 and using EFS. – Deependra Dangal
:) Anyway, what is the OS? – gp42

1 Answer


Add _netdev to the mount options in /etc/fstab. It tells systemd that the mount requires networking, so the mount is deferred until the network is up instead of being attempted too early at boot. I believe this was already answered here.
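
As a sketch, assuming a plain NFS-style entry in /etc/fstab (the file system ID, region, and mount path are placeholders), the line would look something like this:

fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /var/www/html/example/app/etc nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0

If you mount with the amazon-efs-utils helper (type efs) instead, _netdev goes in the same options field. Adding x-systemd.automount as well makes systemd mount the share on first access rather than blocking boot on it, which also avoids racing the network coming up.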