Red Hat Bugzilla – Bug 1248214
Engine is not suggesting an upgrade, even though there is a new ISO available
Last modified: 2016-02-10 14:32:32 EST
Description of problem:
"I cannot upgrade my hypervisors from the current running version (RHEV Hypervisor - 6.6 - 20150128.0.el6ev) to the latest version (RHEV Hypervisor - 6.6 - 20150603.0.el6ev).
When a host is selected from the administration portal, the "Action Items" reads "A new version is available; an upgrade option will appear once the Host is moved to maintenance mode."
The host is *in* maintenance mode when this occurs. I have also tried writing the ISO to USB and booting via USB to manually execute the upgrade. When I boot, I am only presented with the option to "Reinstall 6.6-20150128.0.el6ev". (I have attached a photo of the install screen. Note the version at the top versus the option I am presented with.)
Getting this update installed is extremely important as it mitigates CVE-2015-3456 (aka VENOM)."
Version-Release number of selected component (if applicable):
RHEV Hypervisor - 6.6 - 20150603.0.el6ev
Opened from https://bugzilla.redhat.com/show_bug.cgi?id=1236738
Data will be provided in a follow-up update.
I've marked this as Urgent primarily because of the customer's inability to roll out a fresh build of a hypervisor with the VENOM fix built in, even when performing a fresh installation after re-initializing the RAID array holding the system data. He attempted this when trying to follow the instructions here:
1. Boot new RHEV-H ISO
2. Wait until either (a) the upgrade TUI appears or (b) the login prompt of the previous/existing RHEV-H appears
3a. In case of (a), the upgrade should work now
3b. In case of (b), log in and drop to a shell by pressing 'F2'
4. Create a tarball from /etc, /var/log, /config, and /proc and provide an sosreport
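For reference, a minimal sketch of step 4 (not the exact command the customer needs to run; the output path and the size cap used to skip huge /proc entries are assumptions for illustration):

    import os
    import tarfile

    PATHS = ("/etc", "/var/log", "/config", "/proc")
    MAX_SIZE = 1 << 20  # skip anything larger than 1 MiB (e.g. /proc/kcore)

    def collect(out="/tmp/rhevh-debug.tar.gz"):
        # Pack the requested directories into one tarball, skipping entries
        # that cannot be read or are unreasonably large (common under /proc).
        with tarfile.open(out, "w:gz") as tar:
            for root_dir in PATHS:
                for dirpath, _dirnames, filenames in os.walk(root_dir):
                    for name in filenames:
                        path = os.path.join(dirpath, name)
                        try:
                            if os.path.getsize(path) <= MAX_SIZE:
                                tar.add(path, recursive=False)
                        except (OSError, IOError):
                            pass  # transient /proc entries, permission errors
        return out

    if __name__ == "__main__":
        print(collect())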
In /var/log/ovirt-node.log I see:
CalledProcessError: Command '['lvm', 'vgs', '--noheadings', '-o', 'pv_name', u'Found duplicate PV XXXXXXXXXX: using /dev/sdb2 not /dev/sda2']' returned non-zero exit status 3
This could indicate that multipath is not working correctly (/dev/sdb2 and /dev/sda2 should get assembled into the same multipath device, but that is not happening).
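That "Found duplicate PV ..." string being passed as an argument to vgs suggests a warning line in the output of an earlier lvm call was reused verbatim as a VG name. A minimal sketch of that pattern with a defensive filter, assuming this parsing flow; this is not the actual ovirt-node code:

    import subprocess

    def lvm_lines(*args):
        # Run an lvm subcommand and return its non-empty output lines, with
        # warning/noise lines dropped so they are never mistaken for VG or PV
        # names and fed into the next lvm call.
        out = subprocess.check_output(["lvm"] + list(args))
        lines = [l.strip() for l in out.decode().splitlines() if l.strip()]
        return [l for l in lines
                if not l.startswith(("Found duplicate PV", "WARNING"))]

    # List the PVs behind each VG without letting warnings leak into arguments.
    for vg in lvm_lines("vgs", "--noheadings", "-o", "vg_name"):
        print(vg, lvm_lines("vgs", "--noheadings", "-o", "pv_name", vg))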
Robert, we can try a few things:
1. When booting the upgrade ISO, please remove the rd_NO_MULTIPATH keyword (hit <Tab> on the "Install or Upgrade" syslinux entry and then remove this keyword).
2. Set mpath.wwid=<wwid of disk> on the kernel commandline when booting
You can find out the relevant wwid by running multipath -ll on a normal boot.
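A hypothetical helper for pulling those WWIDs out of multipath -ll, assuming the usual "alias (wwid) dm-N vendor,product" header line format (some setups print the WWID without an alias or parentheses, in which case the pattern needs adjusting):

    import re
    import subprocess

    def multipath_wwids():
        # Extract the WWID from each map's header line in `multipath -ll`,
        # e.g. "mpatha (3600508b1001c...) dm-0 HP,LOGICAL VOLUME".
        out = subprocess.check_output(["multipath", "-ll"]).decode()
        return re.findall(r"^\S+\s+\((\S+)\)\s+dm-\d+", out, flags=re.M)

    if __name__ == "__main__":
        for wwid in multipath_wwids():
            print("mpath.wwid=%s" % wwid)

Each printed mpath.wwid=<wwid> token can then be appended to the kernel command line from the syslinux prompt, alongside removing rd_NO_MULTIPATH as in point 1.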
The info from comment 5 is rather interesting for bug 1236738.
Robert, in the description the customer says that he is told an update is available, but is there actually an update available in the Web Admin when he right-clicks on the host while it is in maintenance?