Bug 980558 - After upgrading RHEV-H, the host system fails to boot
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: ovirt-node
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Assigned To: Joey Boggs (Virtualization Bugs)
Reported: 2013-07-02 13:24 EDT by Anitha Udgiri
Modified: 2013-12-11 00:42 EST

Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-09-19 12:12:36 EDT

External Trackers:
Red Hat Knowledge Base (Solution) 144813

Description Anitha Udgiri 2013-07-02 13:24:59 EDT
Description of problem:
 When trying to upgrade the host from RHEV Hypervisor - 6.3 - 20121212.0.el6_3 to RHEV Hypervisor - 6.4 - 20130528.0.el6_4, the hypervisor gets stuck during the start of system services.

Here is the screenshot content of the errors:
##########################

Welcome to Red Hat Enterprise Virtualization Hypervisor

Starting udev: [ OK ] 

Setting hostname localhost.localdomain: [ OK ]

Setting up Logical Volume Management: 4 logical volume(s) in volume group "HostVG" now active

44 logical volume(s) in volume group "565fac5d-393b-4bcc-8208-0290fac7d22a" now active [ OK ]

cp: '/var/lib/vdsm/' and '/var/lib/stateless/writable/var/lib/vdsm' are the same file 

Mounting local filesystems: [ OK ]

chown: invalid user: 'root:root'

Enabling /etc/fstab swaps [ OK ]

telinit: Did not receive a reply. Possible causes include the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. 

init: rcS post-stop process (2070) terminated with status 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Version-Release number of selected component (if applicable):
The user is running RHEV-M 3.2 and used the GUI to upgrade the hypervisor.


How reproducible:
Not sure

Additional Information:
~~~~~~~~~~~~~~~~~~~~~~~
The user is unable to log in to the hypervisor, even in rescue mode, to check the contents of the logs.
Comment 1 Longjun Song 2013-07-22 06:25:49 EDT
Step 1. Perform a clean, complete installation of RHEV Hypervisor-6.3-20121212.0.el6_3.

Step 2. Upgrade the host to RHEV Hypervisor-6.4-20130528.0.el6_4 from RHEV-M 3.2 (sf18).

Result: The issue does not occur when the installation is intact.

BUT after manually deleting the root entry in the file '/etc/passwd' and rebooting, the problem occurs. I did this by hand and don't know how to reproduce it programmatically. Please attach the detailed steps, log files, and /etc/passwd for analysis.
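The trigger described here (a missing root entry in /etc/passwd) is consistent with the `chown: invalid user: 'root:root'` error in the boot log: boot scripts cannot resolve the name "root" once its passwd entry is gone. As a minimal illustrative sketch (not part of RHEV-H or the actual reproduction steps), a pre-reboot sanity check could confirm the root account still resolves:

```shell
#!/bin/sh
# Illustrative sanity check: boot scripts that run "chown root:root ..."
# fail with "invalid user" when the root entry is missing from /etc/passwd.
# getent consults NSS, so this catches the same lookup the boot scripts do.
if getent passwd root >/dev/null 2>&1; then
    echo "root entry present"
else
    echo "WARNING: root entry missing from /etc/passwd" >&2
    exit 1
fi
```

On a healthy system this prints "root entry present"; on a system broken as described in this comment it would exit non-zero before the reboot makes the host unreachable.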
Comment 2 Lei Wang 2013-08-02 02:18:40 EDT
Hello Anitha, 

As mentioned in comment 1, QE is not sure of the correct way to reproduce this bug, though it seems that removing the root-related data in /etc/passwd can trigger the issue.

Would you please provide more detailed steps and logs (/etc/passwd, etc.) for reproducing and analyzing this issue? Is it 100% reproducible, or does it only happen under specific circumstances?

Thanks!
Comment 3 Anitha Udgiri 2013-08-02 12:18:10 EDT
Hello Lei Wang,
     Unfortunately, we don't have additional information to proceed further here. The customer reinstalled to get back into production.
According to them, they did nothing other than upgrade the host via RHEV-M.
We also could not get any sosreport from the host, as they were unable to access the system even in rescue mode.
Under these circumstances, it may be a good idea to close this BZ.
Comment 6 Mike Burns 2013-09-19 12:12:36 EDT
Closing since we don't have enough information to debug the issue and can't reproduce it. Please re-open if more information surfaces.
Comment 9 Mark Huth 2013-12-11 00:42:37 EST
Actually, I forgot to mention that this customer was also upgrading their hypervisor, which certainly matches the error scenario in comment #0.
