Bug 1356198

Summary: [Docs] Must specify that changing cluster compat mode level (CL) to 3.6 requires VMs shut down first
Product: Red Hat Enterprise Virtualization Manager
Reporter: Marina Kalinin <mkalinin>
Component: Documentation
Assignee: rhev-docs <rhev-docs>
Status: CLOSED DUPLICATE
QA Contact: rhev-docs <rhev-docs>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 3.6.7
CC: dmoessne, gklein, lsurette, michal.skrivanek, mkalinin, rbalakri, sites-redhat, srevivo, ykaul, ylavi
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-07-19 11:28:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Docs
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Marina Kalinin 2016-07-13 16:14:14 UTC
Due to bug 1341023, changing the cluster compatibility level (CL) to 3.6 requires all VMs to be shut down first. This is different from the previous behavior, which is why the requirement needs to be clearly documented for customers.

Where should we document this?

I think this chapter should mention it:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Upgrade_Guide/Upgrading_a_Red_Hat_Enterprise_Linux_6_Cluster_to_Red_Hat_Enterprise_Linux_7.html

Right now I can't find anywhere in the documentation that mentions the cluster compatibility version should be changed at all. We should document that step, the requirement that the VMs be shut down first, and the reason for it, as described in the doc text of the original bug.
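For reference, a minimal sketch of how the change could be scripted against the 3.6 REST API using the 3.x Python SDK (ovirt-engine-sdk-python). The engine URL, credentials, and cluster name are placeholders, and the getter names follow the 3.x SDK conventions rather than anything stated in this bug:

```python
# Sketch only: assumes ovirt-engine-sdk-python 3.x; URL, credentials and
# cluster name are placeholders.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://rhevm.example.com/api',
          username='admin@internal',
          password='password',
          ca_file='/etc/pki/ovirt-engine/ca.pem')

cluster = api.clusters.get(name='Default')

# All VMs in the cluster must be shut down before raising the
# compatibility level (see bug 1341023).
running = [vm.get_name() for vm in api.vms.list()
           if vm.get_cluster().get_id() == cluster.get_id()
           and vm.get_status().get_state() != 'down']
if running:
    raise SystemExit('Shut down these VMs first: ' + ', '.join(running))

# Raise the cluster compatibility version to 3.6.
cluster.set_version(params.Version(major=3, minor=6))
cluster.update()

api.disconnect()
```

The same update can of course be made from the Administration Portal (Edit Cluster -> Compatibility Version); the point of the check above is that the engine will reject the change while VMs in the cluster are still up.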

Comment 2 Carl Thompson 2016-07-13 19:11:41 UTC
As an admin I strongly disagree with the "solution" that all VMs be required to be shut down for a cluster upgrade. In many (most?) real-world clusters it is neither operationally nor politically feasible to shut down all VMs in the cluster at the same time! Further, it wouldn't work as a solution anyway; the HE VM _can't_ be shut down and have the cluster still be in a state where you could change the CV (outside of manually manipulating the DB directly). Please reconsider this!

Thanks,
Carl

Comment 3 Michal Skrivanek 2016-07-14 10:00:46 UTC
(In reply to Carl Thompson from comment #2)
> As an admin I strongly disagree with the "solution" that all VMs be required
> to be shut down for a cluster upgrade.

It's not a question of a solution or a fix. This is an inherent property of using a "cluster level" to define a per-release set of features and functionality applicable to the VMs running in that cluster.

> In many (most?) real-world clusters
> it is neither operationally nor politically feasible to shut down all VMs in
> the cluster at the same time!

Right, and for that bug 1348907 was implemented. It's not a perfect fix, but unfortunately it's the best we can do in 3.6: the proper way (supporting VMs with different sets of features within the same cluster) is missing the required infrastructure. It will be possible from 4.0 onwards.

> Further, it wouldn't work as a solution
> anyway; the HE VM _can't_ be shut down and have the cluster still be in a
> state where you could change the CV (outside of manually manipulating the DB
> directly). Please reconsider this!

Bug 1351533 is specifically about the Hosted Engine case.

> 
> Thanks,
> Carl

Comment 5 Michal Skrivanek 2016-07-14 12:35:05 UTC
(In reply to Marina from comment #0)
> 
> Right now I didn't find anywhere where we mention that cluster compatibility
> mode should be changed at all. We should mention it and we should mention


it is described in https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Upgrade_Guide/chap-Post-Upgrade_Tasks.html#Changing_the_Cluster_Compatibility_Version

but it doesn't go into much detail about what the change actually means, e.g. that you can't resume snapshots with memory state taken under the previous CL, etc.
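As an illustration of the kind of pre-check that section could suggest, here is a rough sketch under the same assumptions as above (ovirt-engine-sdk-python 3.x, placeholder connection details, and assuming snapshots expose the persist_memorystate flag) that lists snapshots with memory state, which would no longer be resumable after the cluster level change:

```python
# Sketch only: list snapshots saved with memory state, which cannot be
# resumed after the cluster compatibility level is raised.
# Assumes ovirt-engine-sdk-python 3.x and that snapshots expose
# persist_memorystate; connection details are placeholders.
from ovirtsdk.api import API

api = API(url='https://rhevm.example.com/api',
          username='admin@internal',
          password='password',
          ca_file='/etc/pki/ovirt-engine/ca.pem')

for vm in api.vms.list():
    for snap in vm.snapshots.list():
        if snap.get_persist_memorystate():
            print('%s: snapshot "%s" includes memory state' %
                  (vm.get_name(), snap.get_description()))

api.disconnect()
```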

Comment 7 Yaniv Lavi 2016-07-19 11:28:12 UTC

*** This bug has been marked as a duplicate of bug 1348907 ***