1 vote

I screwed up the root volume of my EC2 instance, so I attached that root volume to another EC2 instance in order to access the bad volume and rectify my error. But when I start the other instance, the screwed-up volume becomes that instance's root volume. I attached the volume as /dev/sdb (the kernel renamed it to /dev/xvdf), and the instance's original root volume is at /dev/sda (the kernel renamed it to /dev/xvde). So the kernel should load /dev/xvde as the root filesystem, but instead it is loading the screwed-up volume (/dev/xvdf).
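
For reference, I attached it from the AWS CLI roughly like this (the volume and instance IDs are placeholders, not real values; the console works just as well):

    aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdb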

A snippet of the system log is as follows:

dracut: Starting plymouth daemon
xlblk_init: register_blkdev major: 202
blkfront: xvdf: barriers disabled
xvdf: unknown partition table
blkfront: xvde: barriers disabled
xvde: unknown partition table
EXT4-fs (xvdf): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvdf
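
From inside the wrongly booted instance, these standard commands show which device was actually mounted as root and what root= argument the kernel was given (I assume these AMIs boot by filesystem label, so expect something like root=LABEL=...):

    cat /proc/cmdline    # kernel boot arguments, including the root= setting
    mount | grep ' / '   # the block device currently mounted at /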

4 Answers

7 votes

Alternatively, the simple way is to attach the CentOS root volume to an Amazon Linux machine and fix the issue there. Don't attach a CentOS root volume to another EC2 instance running CentOS. CentOS images in the AWS Marketplace use "centos" as the label of the root volume, so when you attach one CentOS root volume to another CentOS machine, the boot process can't tell which root volume to mount and this anomaly happens.
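
You can see why the clash happens by printing the filesystem labels of the two volumes, for example (device names taken from the question; yours may differ):

    e2label /dev/xvde   # original root volume
    e2label /dev/xvdf   # attached CentOS root volume; prints the same label when they clash
    blkid               # alternative: lists LABEL= and UUID= for every block device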

1 vote

Because the screwed-up root volume and the original instance's root volume have the same label on the root partition (in my case the OS is CentOS 6.5 and the label is centos_root), we have to change the label on our instance's volume so that on the next boot it no longer looks for the label centos_root but for our changed label instead.

First, change the root partition's label with e2label, e.g. e2label /dev/xvde your_label (here /dev/xvde is the root partition).

Second, change the label in /etc/fstab and /boot/grub/grub.conf to your_label.

Third, stop the instance.

Fourth, attach the screwed-up root volume to the instance.

Fifth, start the instance.

Sixth, voila: now you can see the screwed-up root volume and mount it on some mount point to fix your issue; the commands for steps one, two, and six are sketched below.
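
Putting the commands for steps one, two, and six together (your_label and /mnt/broken are placeholders, and your /etc/fstab and grub.conf lines may differ slightly, so check them before running the sed):

    e2label /dev/xvde                # prints the current label, e.g. centos_root
    e2label /dev/xvde your_label     # step one: set the new label
    sed -i 's/centos_root/your_label/g' /etc/fstab /boot/grub/grub.conf   # step two
    # steps three to five: stop the instance, attach the broken volume, start it again
    mkdir -p /mnt/broken             # step six: mount point for the broken volume
    mount /dev/xvdf /mnt/broken      # step six: the broken volume appears as /dev/xvdf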

0 votes

detach the "screwed up" volume from the other EC2 instance

boot the other instance normally

attach the EBS volume to the running instance; see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html

do fdisk -l as root and find the device name of the newly attached volume

make a "mount point" (a directory) and mount the desired disk partition on it

once it is fixed, use the umount command on the mount point and then detach the volume (the whole sequence is sketched below)
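
Roughly, on the running instance (assuming the attached volume shows up as /dev/xvdf and using /mnt/recovery as a made-up mount point):

    fdisk -l                         # run as root to find the newly attached device
    mkdir -p /mnt/recovery           # make a mount point
    mount /dev/xvdf /mnt/recovery    # mount the desired device or partition
    # fix whatever is broken under /mnt/recovery, then:
    umount /mnt/recovery             # unmount before detaching the volume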

If the AMI has a marketplace code, try the steps given in this answer: https://serverfault.com/questions/522173/aws-vol-xxxxxxx-with-marketplace-codes-may-not-be-attached-as-as-secondary-dev

0 votes

PSA: don't use CentOS in AWS.

You can no longer attach the root volume of a CentOS instance to another instance. This is by design, to prevent people from circumventing licensing agreements. Even though CentOS is technically free, the rule applies because it's a Marketplace AMI. It's a good rule in general, but it makes recovery of a failed configuration impossible.

Use Amazon Linux. It's basically CentOS anyways.