24 votes

I'm running an AWS EC2 Ubuntu instance with an EBS root volume that was initially 8GB.

This is now 99.8% full, so I've followed AWS documentation instructions to increase the EBS volume to 16GB. I now need to extend my partition /dev/xvda1 to 16GB, but when I run the command

$ growpart /dev/xvda 1

I get the error

mkdir: cannot create directory ‘/tmp/growpart.2626’: No space left on device

I have tried

  1. rebooting the instance
  2. stopping the instance, and mounting a newly created EBS volume of size 16GB based on a snapshot of the old 8GB volume
  3. running docker system prune -a (this failed with "Cannot connect to the Docker daemon at unix:/var/run/docker.sock. Is the docker daemon running?"; trying to start the daemon with sudo dockerd also fails with a "no space left on device" error)
  4. running resize2fs /dev/xvda1

all to no avail.

Running lsblk returns

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0   89M  1 loop /snap/core/7713
loop1     7:1    0   18M  1 loop /snap/amazon-ssm-agent/1480
loop2     7:2    0 89.1M  1 loop /snap/core/7917
loop3     7:3    0   18M  1 loop /snap/amazon-ssm-agent/1455
xvda    202:0    0   16G  0 disk
└─xvda1 202:1    0    8G  0 part /

df -h returns

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M   16M  379M   4% /run
/dev/xvda1      7.7G  7.7G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       90M   90M     0 100% /snap/core/7713
/dev/loop1       18M   18M     0 100% /snap/amazon-ssm-agent/1480
/dev/loop2       90M   90M     0 100% /snap/core/7917
/dev/loop3       18M   18M     0 100% /snap/amazon-ssm-agent/1455
tmpfs           395M     0  395M   0% /run/user/1000

and df -i returns

Filesystem      Inodes  IUsed  IFree IUse% Mounted on
udev            501743    296 501447    1% /dev
tmpfs           504775    457 504318    1% /run
/dev/xvda1     1024000 421259 602741   42% /
tmpfs           504775      1 504774    1% /dev/shm
tmpfs           504775      3 504772    1% /run/lock
tmpfs           504775     18 504757    1% /sys/fs/cgroup
/dev/loop0       12827  12827      0  100% /snap/core/7713
/dev/loop1          15     15      0  100% /snap/amazon-ssm-agent/1480
/dev/loop2       12829  12829      0  100% /snap/core/7917
/dev/loop3          15     15      0  100% /snap/amazon-ssm-agent/1455
tmpfs           504775     10 504765    1% /run/user/1000
Did you try it as the root user? After getting this error I used sudo su and followed this process: docs.aws.amazon.com/AWSEC2/latest/UserGuide/… - Emran

4 Answers

80 votes

For anyone who has this problem, here's a link to the answer: https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/

Summary

  1. Run df -h to verify your root partition is full (100%)
  2. Run lsblk, and then lsblk -f, to get your block-device details
  3. sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp (gives growpart writable scratch space under /tmp)
  4. sudo growpart /dev/DEVICE_ID PARTITION_NUMBER (e.g. sudo growpart /dev/xvda 1)
  5. Run lsblk to verify the partition has expanded
  6. sudo resize2fs /dev/DEVICE_IDPARTITION_NUMBER (device and partition number concatenated, e.g. /dev/xvda1)
  7. Run df -h to verify the resized filesystem
  8. sudo umount /tmp
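Why step 3 works: growpart needs a small scratch directory under /tmp (it backs up the partition table there), and with / at 100% even that mkdir fails; mounting a fresh tmpfs over /tmp gives it room. As an aside, the cloud-utils growpart script creates that directory with mktemp under ${TMPDIR:-/tmp}, so if your version honors TMPDIR (an assumption worth checking against your installed script), redirecting it is an alternative to the mount. A minimal local sketch of the TMPDIR mechanism:

```shell
# mktemp -t honors TMPDIR the same way growpart's scratch directory does.
# /dev/shm is a tmpfs that normally has free space even when / is full.
scratch=$(TMPDIR=/dev/shm mktemp -dt growpart.XXXXXX)
echo "$scratch"    # a directory under /dev/shm, not /tmp
rmdir "$scratch"

# On the instance, the equivalent (assuming growpart honors TMPDIR) would be:
#   sudo TMPDIR=/dev/shm growpart /dev/xvda 1
```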
4 votes

I came across this article http://www.daniloaz.com/en/partitioning-and-resizing-the-ebs-root-volume-of-an-aws-ec2-instance/ and solved it with ideas from there.

Steps taken:

  1. Note down root device (e.g. /dev/sda1)
  2. Stop instance
  3. Detach root EBS volume and then modify volume size if you haven't already
  4. Create an auxiliary instance (e.g. a t2.micro instance, or use an existing one if you wish)
  5. Attach the volume from step 3 to the auxiliary instance (doesn't matter which device)
  6. In the auxiliary instance, run lsblk to ensure the volume has been attached correctly
  7. sudo growpart /dev/xvdf 1 (or similar, to expand the partition)
  8. lsblk to check that the partition has grown
  9. Detach the volume
  10. Attach the volume to your original instance, with device set to the one you noted down in Step 1
  11. Start the instance and then SSH into it
  12. If you still get the message "Usage of /: 99.8% of X.XX GB", run df -h to check the size of your root volume partition (e.g. /dev/xvda1)
  13. Run sudo resize2fs /dev/xvda1 (or similar) to resize your partition
  14. Run df -h to check that your Use% of /dev/xvda1 is no longer ~100%
2 votes

First, I deleted caches and unnecessary files:

sudo apt-get autoclean
sudo apt-get autoremove

After that, I followed this blog:

https://devopsmyway.com/how-to-extend-aws-ebs-volume-with-zero-downtime/
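If those two commands don't free enough space, it can help to see where the space actually went before deleting anything else. A read-only sketch (nothing here modifies the disk):

```shell
# Show the largest top-level directories on the root filesystem
# (-x stays on one filesystem, so /proc, /sys, etc. are skipped).
sudo du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15

# Systemd journal logs are another common culprit on Ubuntu:
sudo journalctl --disk-usage
```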

1 vote

Just mount a small tmpfs over /tmp before running growpart /dev/xvda 1:

sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp

That gives growpart the scratch space it needs and should do the trick.

Here is the full recap on resizing EBS volume:

Run df -h to verify your disk is full (100%)

/dev/xvda1 8.0G 8.0G 20K 100% /

Run lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  20G  0 disk
`-xvda1 202:1    0   8G  0 part /

Mount a small tmpfs over /tmp so growpart has room to work

sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp

Then grow the partition

sudo growpart /dev/xvda 1

CHANGED: partition=1 start=4096 old: size=16773087 end=16777183 new: size=41938911 end=41943007

Finally, run sudo reboot, wait for the instance to come back up, SSH into it again, and df -h should show the new space:

/dev/xvda1       20G  8.1G   12G  41% /

Notice the new available space: the partition is no longer full (41% instead of 100%).
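As a sanity check, the sizes in that CHANGED line are in sectors (512 bytes each, the EBS default logical sector size), so you can confirm the old and new partition sizes directly from growpart's output:

```python
SECTOR = 512  # bytes per sector; assumes the EBS default of 512-byte sectors

# Sizes taken from the CHANGED line above
old_gib = 16773087 * SECTOR / 2**30  # old partition size in GiB
new_gib = 41938911 * SECTOR / 2**30  # new partition size in GiB

print(f"old: {old_gib:.2f} GiB, new: {new_gib:.2f} GiB")  # ~8 GiB -> ~20 GiB
```

The result matches the lsblk output: the partition grew from roughly 8 GiB to roughly 20 GiB.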