Bug 1445920

Summary: [downstream clone - 4.1.2] Host unavailable after upgrade from RHEL-H 7.3 (RHEV 3.6) to RHEL-H 7.3 (RHV 4.0)
Product: Red Hat Enterprise Virtualization Manager
Reporter: rhev-integ
Component: vdsm
Assignee: Yevgeny Zaspitsky <yzaspits>
Status: CLOSED ERRATA
QA Contact: Michael Burman <mburman>
Severity: high
Docs Contact:
Priority: high
Version: 3.6.10
CC: adumitru, bazulay, cshao, danken, dougsland, eedri, huzhao, lsurette, mperina, myakove, oourfali, srevivo, ycui, ykaul, ylavi, yzaspits
Target Milestone: ovirt-4.1.2
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1431188
Environment:
Last Closed: 2017-05-24 11:25:08 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1431188
Bug Blocks:

Description rhev-integ 2017-04-26 19:05:25 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1431188 +++
======================================================================

Description of problem:

Version-Release number of selected component (if applicable):

RHEVM 4.0.6.3-0.1.el7
RHEL-H 7.3 on 3.6


How reproducible:

- Add the 4.0 upgrade repositories on the hypervisor
- Click the Upgrade button in the Engine
- Hosts become unavailable, lose their network, and have to be redeployed

(Originally by dougsland)
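
For reference, the reproduction above uses the Engine UI. A roughly equivalent flow can also be driven through the engine API; the following is a minimal sketch using the oVirt/RHV Python SDK v4, assuming API/SDK 4.1 or later where the host upgrade action is exposed. The engine URL, credentials, CA path, and host name are illustrative placeholders, not values taken from this report.

# Illustrative sketch only: trigger the same host upgrade flow that the
# "Upgrade" button in the Engine starts, via the oVirt/RHV Python SDK v4.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='password',                                # placeholder credentials
    ca_file='/etc/pki/ovirt-engine/ca.pem',             # placeholder CA path
)

try:
    hosts_service = connection.system_service().hosts_service()
    # Look up the host that already has the 4.0 repositories configured.
    host = hosts_service.list(search='name=myhost')[0]  # placeholder host name
    host_service = hosts_service.host_service(host.id)

    # API counterpart of pressing the "Upgrade" button in the Engine UI:
    # the engine moves the host to maintenance, updates its packages, and
    # reactivates it. This is the step where the reported failure occurred.
    host_service.upgrade()
finally:
    connection.close()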

Comment 4 rhev-integ 2017-04-26 19:05:47 UTC
Douglas, could you please provide both engine and VDSM logs?

(Originally by Martin Perina)

Comment 7 rhev-integ 2017-04-26 19:06:05 UTC
Adrian, Douglas, could you attach a sosreport from the host *prior* to the redeployment? This bug should be dedicated to the first failed upgrade; comment 5 analyses another problem with the redeployed host.

(Originally by danken)

Comment 13 rhev-integ 2017-04-26 19:06:42 UTC
This probably shares an underlying reason with bug 1431228: ActivateVdsCommand takes an unbearably long time to finish.

(Originally by danken)

Comment 14 rhev-integ 2017-04-26 19:06:47 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[FOUND CLONE FLAGS: ['rhevm-4.1.z', 'rhevm-4.2-ga'], ]

For more info please contact: rhv-devops

(Originally by rhev-integ)

Comment 15 Michael Burman 2017-04-27 08:47:47 UTC
Hi Douglas, 


- Add the repositories in the hypervisor for 4.0 upgrade
- Click Upgrade button in the Engine

If I understand correctly, the RHV 3.6 host you used is the old node RHV-H and not the NGN RHV-H 3.6, correct?

The NGN RHV-H 3.6 doesn't use the 'Upgrade' button in the engine; only the old node RHV-H does.
The new NGN 3.6 RHV-H is updated by adding the new repos to the host and rebooting it into the new OS, after which the host comes up in the engine (without the Upgrade button).

Dan, 

BTW, I'm not even sure how this can be tested on 4.1.2 (the original report is for engine 4.0.6 and this is targeted for 4.1.2).

The problem is that you can't add an RHV-H 3.6 or RHV 4.0 host to engine 4.1.2, even for 3.6 and 4.0 clusters, so I'm not sure how this report should be tested.

Another thing: this bug is targeted to 4.1.2 but is still on MODIFIED.

Comment 17 Dan Kenigsberg 2017-04-27 11:50:19 UTC
This bug should be added manually to the engine 4.1.2 erratum (I've asked Dusan to do so).

It is about performance, so it does not really matter which rhv-h-ngn you are adding; try rhv-h-ngn-4.1.2.

Comment 18 Douglas Schilling Landgraf 2017-04-27 16:35:25 UTC
Hi Michael,

(In reply to Michael Burman from comment #15)
> Hi Douglas, 
> 
> 
> - Add the repositories in the hypervisor for 4.0 upgrade
> - Click Upgrade button in the Engine
> 
> If i understand it correctly, the rhv-3.6 you used is the old node rhv-h and
> not ngn rhv-h 3.6, correct?

Adrian's (adumitru) environment was RHEL-H 3.6, not RHEV-H.

> 
> As the ngn rhv-h 3.6 don't invoke the 'Upgrade' button in the engine, only
> old node rhv-h. 
> The new ngn 3.6 rhv-h is updated when adding new repos to the host, then
> rebooting it from the new OS and host is up in the engine(without the
> upgrade button).
> 
> Dan, 
> 
> BTW, i'm not even sure how this can be tested on 4.1.2(as the original
> report is for engine 4.0.6 and this is targeted for 4.1.2).
> 
> Problem is that you can't add rhv-h 3.6 or rhv-4.0 to engine 4.1.2, even
> it's for clusters 3.6 and 4.0. not sure how this report should be tested.
> 
> Another thing, this bug targeted to 4.1.2, but still on modified.

Comment 19 rhev-integ 2017-05-05 09:25:56 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[NO RELEVANT PATCHES FOUND]

For more info please contact: rhv-devops

Comment 20 Yevgeny Zaspitsky 2017-05-05 09:45:34 UTC
All relevant patches are reported on BZ#1431228, which this bug depends on transitively.

Comment 21 Michael Burman 2017-05-08 06:48:54 UTC
Verified on 4.1.2.1-0.1.el7

Comment 23 errata-xmlrpc 2017-05-24 11:25:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1281