Bug 1248214

Summary: Engine is not suggesting an upgrade, even if there is a new ISO available
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.5.3
Hardware: x86_64
OS: Linux
Status: CLOSED INSUFFICIENT_DATA
Severity: medium
Priority: unspecified
Reporter: Robert McSwain <rmcswain>
Assignee: Moti Asayag <masayag>
QA Contact: Pavol Brilla <pbrilla>
CC: adevolder, dougsland, ecohen, fdeutsch, lpeer, lsurette, pstehlik, rbalakri, Rhev-m-bugs, rmcswain, yeylon
Target Milestone: ---
Target Release: ---
Whiteboard: infra
Doc Type: Bug Fix
Story Points: ---
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: Infra
Cloudforms Team: ---
Last Closed: 2015-09-09 07:57:37 UTC

Description Robert McSwain 2015-07-29 21:14:32 UTC
Description of problem:

"I cannot upgrade my hypervisors from the current running version (RHEV Hypervisor - 6.6 - 20150128.0.el6ev) to the latest version (RHEV Hypervisor - 6.6 - 20150603.0.el6ev).

When a host is selected from the administration portal, the "Action Items" reads "A new version is available; an upgrade option will appear once the Host is moved to maintenance mode."

The host is *in* maintenance mode when this occurs. I have also tried writing the ISO to USB and booting via USB to manually execute the upgrade. When I boot, I am only presented with the option to "Reinstall 6.6-20150128.0.el6ev". (I have attached a photo of the install screen. Note the version at the top versus the option I am presented with.)

Getting this update installed is extremely important as it mitigates CVE-2015-3456 (aka VENOM)."

Version-Release number of selected component (if applicable):
RHEV Hypervisor - 6.6 - 20150603.0.el6ev


How reproducible:
Unknown

Additional Information:
Opened from https://bugzilla.redhat.com/show_bug.cgi?id=1236738
Data will be provided in a following update

Comment 2 Robert McSwain 2015-07-31 20:50:23 UTC
I've marked this as Urgent primarily because of the customer's inability to roll out a fresh build of a hypervisor with the VENOM fix built in. The upgrade fails even after performing a fresh installation with the RAID array holding the system data re-initialized. He attempted this while following the instructions here:


1. Boot the new RHEV-H ISO
2. Wait until either (a) the upgrade TUI appears or (b) the login prompt of the previous/existing RHEV-H appears
3. (a) In case of (a), the upgrade should work now
3. (b) In case of (b), log in and drop to a shell by pressing 'F2'
4. Create a tarball from /etc, /var/log, /config, and /proc and provide an sosreport (a sketch of these commands follows below)
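For step 4, something along these lines should work (a minimal sketch; the archive path is illustrative, adjust as needed):

  # From the RHEV-H shell (F2); collect the requested directories:
  tar czf /tmp/rhevh-logs.tar.gz /etc /var/log /config /proc
  # Note: /proc contains pseudo-files; if tar stalls, excluding large
  # entries such as /proc/kcore may be necessary:
  #   tar czf /tmp/rhevh-logs.tar.gz --exclude=/proc/kcore /etc /var/log /config /proc
  # Then generate the sosreport:
  sosreport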

Comment 5 Fabian Deutsch 2015-08-03 22:14:53 UTC
In /var/log/ovirt-node.log I see:

CalledProcessError: Command '['lvm', 'vgs', '--noheadings', '-o', 'pv_name', u'Found duplicate PV XXXXXXXXXX: using /dev/sdb2 not /dev/sda2']' returned non-zero exit status 3

This could indicate that multipath is not working correctly (/dev/sdb2 and /dev/sda2 should get assembled into the same LV, but that is not the case).
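One plausible reading of the traceback (a sketch only; HostVG and the device names are illustrative, not taken from this system): with duplicate PVs present, lvm prints a warning on stderr, and a caller that captures stderr together with stdout can mistake the warning line for a real value and pass it on as an argument to the next lvm invocation.

  # Step 1 (illustrative): ask LVM for the VG of a device; with
  # duplicate PVs, a warning appears on stderr:
  lvm pvs --noheadings -o vg_name /dev/sda2
  #   Found duplicate PV XXXXXXXXXX: using /dev/sdb2 not /dev/sda2   (stderr)
  #   HostVG                                                         (stdout)
  # Step 2: if both streams were captured together, the warning line
  # would be passed on as if it were the VG name:
  lvm vgs --noheadings -o pv_name "Found duplicate PV XXXXXXXXXX: using /dev/sdb2 not /dev/sda2"
  # ...which fails with a non-zero exit status, as in the traceback.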

Robert, we can try a few things:

1. When booting the upgrade ISO, please remove the rd_NO_MULTIPATH keyword (hit <Tab> on the "Install or Upgrade" syslinux entry to edit the kernel command line, then delete the keyword).

2. Set mpath.wwid=<wwid of disk> on the kernel command line when booting.
   You can find the relevant wwid by running multipath -ll on a normal boot (see the sketch below).
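A sketch of option 2 (the WWID shown is illustrative only):

  # On a normal boot, list the multipath topology:
  multipath -ll
  # The WWID is the long identifier starting each map entry, e.g.:
  #   36005076801234567 dm-0 VENDOR,MODEL
  # When booting the upgrade ISO, append that value to the kernel
  # command line:
  #   mpath.wwid=36005076801234567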

Comment 7 Fabian Deutsch 2015-08-03 22:49:03 UTC
The info from comment 5 is rather interesting for bug 1236738.

Robert, in the description the customer says that he is told an update is available, but is there actually an update available in the web admin when he right-clicks on the host while it is in maintenance?

Comment 14 Red Hat Bugzilla 2023-09-14 03:02:46 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days