Bug 1248214 - Engine is not suggesting an upgrade, even if there is a new iso available [NEEDINFO]
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
3.5.3
x86_64 Linux
unspecified Severity medium
: ---
: ---
Assigned To: Moti Asayag
Pavol Brilla
infra
:
Depends On:
Blocks:
Reported: 2015-07-29 17:14 EDT by Robert McSwain
Modified: 2016-02-10 14:32 EST (History)
13 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-09-09 03:57:37 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Infra
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
dougsland: needinfo? (rmcswain)


Attachments

  None
Description Robert McSwain 2015-07-29 17:14:32 EDT
Description of problem:

"I cannot upgrade my hypervisors from the current running version (RHEV Hypervisor - 6.6 - 20150128.0.el6ev) to the latest version (RHEV Hypervisor - 6.6 - 20150603.0.el6ev).

When a host is selected from the administration portal, the "Action Items" reads "A new version is available; an upgrade option will appear once the Host is moved to maintenance mode."

The host is *in* maintenance mode when this occurs. I have also tried writing the ISO to USB and booting via USB to manually execute the upgrade. When I boot, I am only presented with the option to "Reinstall 6.6-20150128.0.el6ev". (I have attached a photo of the install screen. Note the version at the top versus the option I am presented with.)

Getting this update installed is extremely important as it mitigates CVE-2015-3456 (aka VENOM)."

Version-Release number of selected component (if applicable):
RHEV Hypervisor - 6.6 - 20150603.0.el6ev


How reproducible:
Unknown

Additional Information:
Opened from https://bugzilla.redhat.com/show_bug.cgi?id=1236738
Data will be provided in a following update
Comment 2 Robert McSwain 2015-07-31 16:50:23 EDT
I've marked this as Urgent primarily because of the customer's inability to roll out a fresh build of a hypervisor with the VENOM fix built in, even after re-initializing the RAID array holding the system data and performing a fresh installation. He attempted this while trying to follow the instructions here:


1. Boot the new RHEV-H ISO
2. Wait until either (a) the upgrade TUI appears or (b) the login prompt of the previous/existing RHEV-H appears
3. (a) In case (a), the upgrade should work now
   (b) In case (b), log in and drop to a shell by pressing 'F2'
4. Create a tarball from /etc, /var/log, /config, and /proc, and provide an sosreport
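Step 4 above could be sketched roughly as follows, run from the shell reached via F2. This is only a sketch: the archive path is arbitrary, both commands need root, and sosreport assumes the sos package is installed.

```shell
# Tar the directories named in step 4; --ignore-failed-read lets tar
# continue past /proc entries that cannot be read as regular files.
tar czf /tmp/rhevh-logs.tar.gz --ignore-failed-read /etc /var/log /config /proc

# Collect the support report non-interactively (assumes sos is installed):
sosreport --batch
```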
Comment 5 Fabian Deutsch 2015-08-03 18:14:53 EDT
In /var/log/ovirt-node.log I see:

CalledProcessError: Command '['lvm', 'vgs', '--noheadings', '-o', 'pv_name', u'Found duplicate PV XXXXXXXXXX: using /dev/sdb2 not /dev/sda2']' returned non-zero exit status 3

This could indicate that multipath is not working correctly (/dev/sdb2 and /dev/sda2 should be assembled into the same LV, but that is not the case).

Robert, we can try a few things:

1. When booting the upgrade ISO, please remove the rd_NO_MULTIPATH keyword (hit <Tab> on the "Install or Upgrade" syslinux entry and then remove this keyword).

2. Set mpath.wwid=<wwid of disk> on the kernel command line when booting.
   You can find the relevant wwid by running multipath -ll on a normal boot.
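Option 2 above could look roughly like this; the WWID and vendor string shown are made-up examples, not values from this system.

```shell
# During a normal boot, list the multipath topology (needs root):
multipath -ll
# The first token of each map line is the WWID, e.g. (example value):
#   36005076801810523a000000000000042 dm-0 IBM,2145
# Then append that WWID to the upgrade ISO's kernel command line:
#   mpath.wwid=36005076801810523a000000000000042
```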
Comment 7 Fabian Deutsch 2015-08-03 18:49:03 EDT
The info from comment 5 is rather interesting for bug 1236738.

Robert, in the description, the customer says that he is told an update is available, but is there actually an update available in the Web Admin when he right-clicks the host while it is in maintenance?
