+++ This bug is an upstream to downstream clone. The original bug is: +++
+++ bug 1426172 +++
======================================================================

Description of problem:
After registering to RHEVM, when upgrading from wrapper to wrapper, the latest build's boot entry is missing and the new RHVH cannot be entered.

Version-Release number of selected component (if applicable):
RHVH-4.0-20160919.1-RHVH-x86_64-dvd1.iso (first build)
redhat-virtualization-host-image-update-4.0-20161116.1.el7_3.noarch.rpm (second build)
redhat-virtualization-host-4.0-20170222.0.x86_64 (new build)
kernel-3.10.0-514.6.2.el7.x86_64
imgbased-0.8.13-0.1.el7ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install RHVH-4.0-20160919.1-RHVH-x86_64-dvd1.iso
2. Reboot into the new system and register to RHEVM
3. Set up a local repo and run "yum install redhat-virtualization-host-image-update-4.0-20161116.1.el7_3.noarch.rpm"
4. Reboot into the second build and run "yum update"
5. Reboot into the new RHVH

Actual results:
After step 5, the new build's boot entry is missing and login to the new system fails.

Expected results:
After step 5, the new build's boot entry is displayed and login to the new system succeeds.

Additional info:
1. Upgrading RHVH-4.0-20160919.1-RHVH-x86_64 directly to the latest build works.
2. Without registering to the engine, the wrapper-to-wrapper upgrade is OK.

(Originally by Jian Wu)
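Step 3's "set local repo" is not spelled out in the report; a minimal sketch of what such a repo definition might look like, assuming the update RPM was copied into a hypothetical /root/local-repo directory and repo metadata was generated there (the repo id, name, and path are all illustrative, not from the report):

```ini
# /etc/yum.repos.d/local-rhvh.repo -- hypothetical repo id and baseurl
[local-rhvh]
name=Local RHVH image-update repo
baseurl=file:///root/local-repo
enabled=1
gpgcheck=0
```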
Created attachment 1256870 [details] grub file (Originally by Jian Wu)
Created attachment 1256873 [details] picture 1 for details (Originally by Jian Wu)
Created attachment 1256874 [details] picture 2 for details (Originally by Jian Wu)
Created attachment 1256875 [details] 0222_update_log1 (Originally by Jian Wu)
Hi, Ryan

I have emailed you about this bug's environment. Because the sosreport is too big, I have copied it to our local NFS storage.

Thanks,
Jiawu

(Originally by Jian Wu)
Thanks for the update - it's very interesting that it works without registering.

The test environment had devices missing from /dev/mapper even though they were present in `lvs` (an issue which was reported separately by RHV QE last week, though only one case, and I couldn't reproduce it). I wonder if engine is doing something here.

There is no boot entry because the upgrade process failed: /dev/mapper/rhvh-4.0-0.20160919.0+1 could not be mounted, because no device node existed for it even though LVM could see the LV.

Re-activating all the LVs before updating should fix this.

(Originally by Ryan Barry)
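The suggested workaround above (re-activate all LVs so the /dev/mapper nodes exist before "yum update") can be sketched as follows. These commands need root on the affected RHVH host, so the sketch uses a dry-run wrapper that only prints them; drop the "run" prefix to execute for real. The VG name "rhvh" is inferred from the report's device path /dev/mapper/rhvh-4.0-0.20160919.0+1, and the exact command set is an assumption, not a confirmed fix procedure:

```shell
# Dry-run wrapper: print each command instead of executing it,
# since activation requires root on the affected RHVH host.
run() { echo "+ $*"; }

run vgchange --activate y rhvh      # activate every LV in the rhvh VG
run vgscan --mknodes                # recreate any missing /dev/mapper nodes
run lvs -o lv_name,lv_active rhvh   # confirm the layer LVs now show as active
```

After the LVs are active (check the lv_active column from `lvs`), re-run "yum update" and the upgrade layer should mount and produce a boot entry.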
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release. (Originally by rule-engine)
Hi,

I have verified this bug on rhvh-4.0-0.20170302.0+1.

Version-Release number of selected component (if applicable):
RHVH-4.0-20160919.1-RHVH-x86_64-dvd1.iso (first build)
redhat-virtualization-host-image-update-4.0-20161116.1.el7_3.noarch.rpm (second build)
redhat-virtualization-host-4.0-20170302.0.x86_64 (new build)
imgbased-0.8.15-0.1.el7ev.noarch
kernel-3.10.0-514.10.2.el7.x86_64

Steps to Reproduce:
1. Install RHVH-4.0-20160919.1-RHVH-x86_64-dvd1.iso
2. Reboot into the new system and register to RHEVM
3. Set up a local repo and run "yum install redhat-virtualization-host-image-update-4.0-20161116.1.el7_3.noarch.rpm"
4. Reboot into the second build and run "yum update"
5. Reboot into the new RHVH

Actual results:
After step 5, the new build's entry is displayed and login to the new system succeeds.

So this bug is fixed; I will change the status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0549.html