57 votes

I had some unknown issue with my old EC2 instance, and I can no longer SSH into it. So I'm attempting to create a new EBS volume from a snapshot of the old volume and mount it on a new instance. Here is exactly what I did:

  1. Created a new volume from snapshot of the old one.
  2. Created a new EC2 instance and attached the volume to it as /dev/xvdf (or /dev/sdf)
  3. SSHed into the instance and attempted to mount the old volume with:

    $ sudo mkdir -m 000 /vol
    $ sudo mount /dev/xvdf /vol

And the output was:

mount: block device /dev/xvdf is write-protected, mounting read-only
mount: you must specify the filesystem type

I know I should specify the filesystem as ext4, but the volume contains a lot of important data, so I cannot afford to format it with $ sudo mkfs -t ext4 /dev/xvdf. If I try sudo mount /dev/xvdf /vol -t ext4 (no formatting), I get:

mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

And dmesg | tail gives me:

[ 1433.217915] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.222107] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.226127] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.260752] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.265563] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.270477] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.274549] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.277632] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.306549] ISOFS: Unable to identify CD-ROM format.
[ 2373.694570] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem

By the way, the 'mounting read-only' message also worries me, but I haven't looked into it yet since I can't mount the volume at all.

Thanks in advance!


10 Answers

111 votes

The One Liner


🥇 Mount the partition (if the disk is partitioned):

sudo mount /dev/xvdf1 /vol -t ext4

Mount the disk (if not partitioned):

sudo mount /dev/xvdf /vol -t ext4

where:

  • /dev/xvdf is replaced with the EBS volume device being mounted.
  • /vol is replaced with the folder you want to mount to.
  • ext4 is the filesystem type of the volume being mounted.

Common Mistakes & How-Tos:


✳️ Attached Devices List

Check your mount command for the correct EBS Volume device name and filesystem type. The following will list them all:

sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,UUID,LABEL

If your EBS volume shows up with an attached partition, mount the partition, not the disk.
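
For example, a hypothetical (abridged) listing for a partitioned volume might look like the following; here you would mount /dev/xvdf1, not /dev/xvdf:

NAME    TYPE  SIZE FSTYPE MOUNTPOINT
xvda    disk    8G
└─xvda1 part    8G ext4   /
xvdf    disk  100G
└─xvdf1 part  100G ext4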


✳️ If your volume isn't listed

If it doesn't show up, you haven't attached your EBS volume in the AWS web console.
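
You can also attach it from the command line with the AWS CLI (a sketch; the volume ID, instance ID, and device name below are placeholders):

aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf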


✳️ Auto Remounting on Reboot

These devices become unmounted again if the EC2 Instance ever reboots.

A way to make them mount again upon startup is to add the volume to the server's /etc/fstab file.

🔥 Caution:🔥
If you corrupt the /etc/fstab file, it will make your system unbootable. Read AWS's short article so you know how to check that you did it correctly.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html#ebs-mount-after-reboot

First:
With the lsblk command above, find your volume's UUID & FSTYPE.
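
If you prefer, blkid reports the same information (assuming the partition is /dev/xvdf1; adjust the device name to yours):

sudo blkid /dev/xvdf1
# example: /dev/xvdf1: UUID="e4a4b1df-cf4a-469b-af45-89beceea5df7" TYPE="ext4"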

Second:
Keep a copy of your original fstab file.

sudo cp /etc/fstab /etc/fstab.original

Third:
Add a line for the volume in sudo nano /etc/fstab.

The fields of fstab are whitespace-separated (tabs or spaces), and each line has the following fields:

UUID=<UUID>  <MOUNTPOINT>    <FSTYPE>    defaults,discard,nofail 0   0

Here's an example to help you, my own fstab reads as follows:

LABEL=cloudimg-rootfs   /   ext4    defaults,discard,nofail 0   0
UUID=e4a4b1df-cf4a-469b-af45-89beceea5df7   /var/www-data   ext4    defaults,discard,nofail 0   0

That's it, you're done. Check for errors in your work by running:

sudo mount --all --verbose

You will see something like this if things are 👍:

/                   : ignored
/var/www-data       : already mounted

24 votes

I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf.

Using

sudo mount /dev/xvdf1 /vol -t ext4

worked like a charm
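
If you are unsure whether your own device is partitioned, one quick check (assuming the device is /dev/xvdf) is:

sudo file -s /dev/xvdf     # mentions a boot sector / partition table if the disk is partitioned
sudo file -s /dev/xvdf1    # shows the filesystem on the partition, e.g. ext4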

22 votes

I encountered this problem too, after adding a new 16GB volume and attaching it to an existing instance. First of all, you need to know what disks are present. Run:

  sudo fdisk -l 

You'll have output like the one shown below, detailing information about your disks (volumes):

 Disk /dev/xvda: 12.9 GB, 12884901888 bytes
  255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065    25157789    12570862+  83  Linux

 Disk /dev/xvdf: 17.2 GB, 17179869184 bytes
 255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0x00000000

 Disk /dev/xvdf doesn't contain a valid partition table

As you can see, the newly added disk /dev/xvdf is present. To make it available, you need to create a filesystem on it and mount it at a mount point. You can achieve that with the following commands:

 sudo mkfs -t ext4 /dev/xvdf

Making a new file system clears everything on the volume, so only do this on a fresh volume without important data.
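
If you are not sure whether the volume already holds data, you can check for an existing filesystem before running mkfs (a sketch; adjust the device name):

sudo file -s /dev/xvdf
# "/dev/xvdf: data" usually means there is no filesystem yet; any output naming a
# filesystem (ext4, XFS, ...) means mkfs would destroy existing data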

Then mount it, for example in a directory under the /mnt folder:

 sudo mount /dev/xvdf /mnt/dir/

Confirm that you have mounted the volume to the instance by running

  df -h

This is what you should have

Filesystem      Size  Used Avail Use% Mounted on
 udev            486M   12K  486M   1% /dev
 tmpfs           100M  400K   99M   1% /run
 /dev/xvda1       12G  5.5G  5.7G  50% /
 none            4.0K     0  4.0K   0% /sys/fs/cgroup
 none            5.0M     0  5.0M   0% /run/lock
 none            497M     0  497M   0% /run/shm
 none            100M     0  100M   0% /run/user
 /dev/xvdf        16G   44M   15G   1% /mnt/ebs

And that's it, you have the volume attached to your existing instance and ready for use.

13 votes

I encountered this problem too, and here is how I figured it out:

[ec2-user@ip-172-31-63-130 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part

You should mount the partition,

/dev/xvdf1 (whose TYPE is part),

not the disk,

/dev/xvdf (whose TYPE is disk).
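
So, assuming the mount point /vol already exists, the command becomes (mount can usually detect the filesystem type of a valid partition by itself):

sudo mount /dev/xvdf1 /vol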

3 votes

I had a different issue. When I checked the dmesg logs, the problem was that the UUID of the volume I was attaching (the root volume of another EC2 instance) was the same as the UUID of the existing root volume. To fix this, I mounted it on an EC2 instance of a different Linux type, and it worked.
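
If you want to confirm the clash before moving the volume to another instance, comparing the UUIDs of the attached devices should show it (a sketch, assuming the usual xvda1/xvdf1 device names):

sudo blkid /dev/xvda1 /dev/xvdf1
# identical UUID values confirm the conflict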

1 vote

First, run the command below:

lsblk /dev/xvdf

The output will be something like this:

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf    202:80   0  10G  0 disk
├─xvdf1 202:81   0   1M  0 part
└─xvdf2 202:82   0  10G  0 part

Then check the sizes and mount the right one. In the case above, mount it like this:

mount /dev/xvdf2 /foldername

1 vote

For me it was a duplicate UUID error while mounting the volume, so I used the "-o nouuid" option.

For example: mount -o nouuid /dev/xvdf1 /mnt

I found the clue in the system logs (/var/log/messages on CentOS), where I saw the error: kernel: XFS (xvdf1): Filesystem has duplicate UUID f41e390f-835b-4223-a9bb-9b45984ddf8d - can't mount
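
If the filesystem is XFS, a more permanent fix is to give the restored volume a fresh UUID once it has been mounted with nouuid and cleanly unmounted (a sketch; xfs_admin must run on an unmounted filesystem):

sudo umount /mnt
sudo xfs_admin -U generate /dev/xvdf1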

0 votes

You do not need to create a file system on a volume newly created from a snapshot. Simply attach the volume and mount it to the folder you want. I attached the new volume to the same location as the previously deleted volume, and it worked fine.

[ec2-user@ip-x-x-x-x vol1]$ sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk 
└─xvda1 202:1    0   8G  0 part /
xvdb    202:16   0  10G  0 disk /home/ec2-user/vol1
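
For reference, the mount step itself looks something like this (a sketch, using the device and folder from the listing above):

sudo mkdir -p /home/ec2-user/vol1
sudo mount /dev/xvdb /home/ec2-user/vol1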

0 votes

I usually make the mount persistent by pre-defining the UUID when creating the ext4 filesystem. I add a script to the instance's user data and launch the instance; it works fine without any issues.

Ex script:

#!/bin/bash
# Create the directory to be mounted
sudo mkdir -p /data
# Create the file system with a pre-defined UUID & label (edit the device name as needed)
sudo mkfs -U aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa -L DATA -t ext4 /dev/nvme1n1 

# Mount
sudo mount /dev/nvme1n1 /data -t ext4

# Update the fstab to persist after reboot
sudo su -c "echo 'UUID=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa   /data  ext4    defaults,discard,nofail 0   0' >> /etc/fstab"

0 votes

For me there was some mysterious file causing this issue, and I had to wipe the device by creating a new filesystem with the following command.

sudo mkfs -t ext3 /dev/sdf

Warning: this deletes everything saved on the volume, so run ls on its contents first to make sure you aren't losing important files.
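
Before formatting, it may be worth attempting a read-only mount to inspect what is on the device (a sketch; the mount simply fails if there is no valid filesystem to begin with):

sudo mkdir -p /mnt/check
sudo mount -o ro /dev/sdf /mnt/check && ls /mnt/check
sudo umount /mnt/check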