Bug 1333569

Summary: undercloud server failed to boot and goes into emergency mode
Product: Red Hat OpenStack
Component: rhosp-director
Version: 8.0 (Liberty)
Reporter: bigswitch <rhosp-bugs-internal>
Assignee: Angus Thomas <athomas>
QA Contact: Arik Chernetsky <achernet>
CC: dbecker, mburns, morazi, rhel-osp-director-maint
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Last Closed: 2016-05-06 10:57:22 UTC

Description bigswitch 2016-05-05 21:39:19 UTC
Description of problem:
Seen this on multiple Dell servers. After installing the undercloud, the server goes into emergency mode when it is rebooted. According to boot.log, the failure is a timeout waiting for a device to respond. Running lvm vgchange -ay makes the three partitions mountable, but when attempting to start services the server reboots and the console becomes unresponsive after that. Physically power cycling the box brings it back into emergency mode.
I've tried both the BIOS and UEFI boot options, and the problem is seen with both.
Also, I've rebooted multiple times after yum update and after installing python-tripleoclient, and each time the server comes up fine. The problem only happens after the undercloud is installed.
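
In other words, the manual recovery from the emergency shell is roughly the following (a sketch; the volume group and mount points are whatever lvs reports on the affected host):

  # activate all logical volumes in all volume groups
  lvm vgchange -ay
  # confirm the logical volumes now show as active
  lvm lvs
  # mount everything listed in /etc/fstab
  mount -a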

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Install Red Hat Enterprise Linux x86_64
2. Install the RHOSP 8 undercloud
3. Reboot

Comment 2 bigswitch 2016-05-06 00:01:15 UTC
Found my mistake: I should've run systemctl default to go back into the default target. However, the undercloud controller shouldn't go into emergency mode in the first place.
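
For the record, that recovery from the emergency shell, without a power cycle:

  # leave emergency mode and continue to the configured default target
  systemctl default
  # check which target the system boots into by default
  systemctl get-default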

Comment 3 Mike Burns 2016-05-06 10:57:22 UTC
I suspect this is due to bug 1323024.  If you have multiple logical volumes defined to be mounted on boot, they won't get activated.  The solution is simply not to mount them at boot, by editing /etc/fstab.  There are a couple of other options listed in bug 1323024.  If these don't solve your problem, please reopen this bug.
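
For illustration, the /etc/fstab edit looks something like this; the device path and mount point below are placeholders for whichever extra logical volumes are affected:

  # comment out the extra LV mount so boot no longer waits for it:
  #/dev/mapper/vg_extra-data  /data  xfs  defaults        0 0
  # or keep the entry but let boot proceed even if the device is absent:
  /dev/mapper/vg_extra-data   /data  xfs  defaults,nofail 0 0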

*** This bug has been marked as a duplicate of bug 1323024 ***