Description of problem:
(Everything went fine up to the point where the VHD image was imported as an EC2 AMI according to the documentation.)
Prior to starting the EC2 instance, the customer added a 50GB EBS block device as /dev/sdb, to be used as the internal database store.
However, when the instance was rebooted, it did not come back up. The console output showed that /dev/sdb had been mounted as /mnt, so the logical volume that had been created on /dev/sdb for the database could not be imported and mounted. The system did not finish booting and prompted for the root password for maintenance (under Amazon AWS the console can only be viewed, not accessed interactively).
Version-Release number of selected component (if applicable):
Steps to Reproduce:
The system attempts to mount /dev/sdb as /mnt instead of letting LVM import the LV and mounting the postgres mountpoint.
Problem can be fixed by:
* Shutting down CloudForms appliance
* Detaching the /dev/sda1 EBS volume and attaching it to an existing RHEL 7 AWS EC2 instance
* Running pvscan and lvchange -ay VG-CFME/lv_os (lvchange is the LVM activation command; there is no "lvactivate")
* Mounting /dev/mapper/VG--CFME-lv_os /mnt
* Removing the /mnt entry from /mnt/etc/fstab
* Unmounting /mnt and shutting down the system
* Reattaching the EBS volume as /dev/sda1 on the CloudForms appliance
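The recovery steps above come down to activating the appliance's LVM volume from a helper instance and deleting the cloud-init-generated /mnt entry from its fstab. A minimal sketch follows; the pvscan/lvchange/mount commands (device and VG names taken from this report) are shown as comments because they require the real attached volume, and the fstab edit is demonstrated on a throwaway copy rather than the real /mnt/etc/fstab:

```shell
# On the helper instance, after attaching the appliance's root EBS volume:
#   pvscan
#   lvchange -ay VG-CFME/lv_os
#   mount /dev/mapper/VG--CFME-lv_os /mnt
# Then edit /mnt/etc/fstab. Demonstrated here on a demo copy:
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/VG--CFME-lv_os /    xfs  defaults 0 0
/dev/xvdb /mnt auto defaults,nofail,comment=cloudconfig 0 2
EOF
# cloud-init tags the entries it writes with comment=cloudconfig,
# so that tag is a safe match for deleting its /mnt line:
sed -i '/comment=cloudconfig/d' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After the edit only the root filesystem entry remains, so the appliance no longer tries to mount /dev/xvdb on /mnt at boot.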
The culprit that causes this to happen is cloud-init. To stop it happening again, the customer commented out the "- mounts" line in /etc/cloud/cloud.cfg.
You might want to rebuild the VHD image without the "- mounts" line in /etc/cloud/cloud.cfg.
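For reference, the workaround amounts to disabling the mounts module in the cloud_config_modules list of /etc/cloud/cloud.cfg. A rough sketch of the edited fragment follows; the neighbouring module names are illustrative, as the exact list varies by cloud-init version:

```yaml
cloud_config_modules:
 - ssh-import-id
 - set-passwords
# - mounts   # disabled so cloud-init stops writing /mnt into /etc/fstab
 - timezone
```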
Feel free to correct the component type (I've set it to 'build') if that is wrong.
I could reproduce this; the problem is that /etc/fstab contains the following line by default (even when the system has only a single drive):
/dev/xvdb /mnt auto defaults,nofail,comment=cloudconfig 0 2
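Where user data can be supplied at instance launch, an alternative to editing cloud.cfg is to override the mounts configuration in cloud-config so that default entry is never written; cloud-init's mounts module treats a null mount point as "do not mount / remove the default". A sketch, assuming the device names from this report:

```yaml
#cloud-config
mounts:
 - [ ephemeral0, null ]
 - [ /dev/xvdb, null ]
```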
Verified in 184.108.40.206. An appliance deployed in an AWS EC2 environment works with the database drive even after a reboot.