Bug 968178

Summary: [RHEVM-RHS] Should check for gluster capabilities when moving host from virt to gluster cluster
Product: Red Hat Enterprise Virtualization Manager
Reporter: Sahina Bose <sabose>
Component: ovirt-engine-webadmin-portal
Assignee: Sahina Bose <sabose>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.2.0
CC: acathrow, ecohen, hchiramm, iheim, jkt, Rhev-m-bugs, sabose, sasundar, scohen, sdharane
Target Milestone: ---
Flags: scohen: Triaged+
Target Release: 3.3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: gluster
Fixed In Version: is26
Doc Type: Bug Fix
Doc Text:
Previously, when a host was moved from a Virt cluster to a Gluster cluster, there was no check for Gluster capabilities on the host. Now, during a cluster change the host is checked for Gluster capabilities, and if the check fails the host is not activated. The VDSM dictionary is also updated, so service monitoring strategies are updated.
Story Points: ---
Clone Of:
Environment: virt rhev integration
Last Closed: 2014-01-21 17:24:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1044030    

Description Sahina Bose 2013-05-29 07:34:45 UTC
Description of problem:
When a host is moved from a virt cluster to a gluster cluster, there is no check for gluster capabilities on the host.

Version-Release number of selected component (if applicable):
RHEV-M 3.2

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

The host state is UP if it is moved from a virt cluster to an empty gluster cluster.


Expected results:

When a RHEV-H node is moved from a virt cluster to a gluster cluster, it should become Non-Operational, as the node does not have the required packages.

Additional info:
Currently, if the host is moved to a gluster cluster that has other hosts, the status goes to Non-Operational because the gluster peer probe fails on the newly added RHEV-H host.
But the gluster peer probe is not performed when the host is added to an empty cluster.

Comment 1 Itamar Heim 2013-05-29 09:43:33 UTC
*** Bug 965068 has been marked as a duplicate of this bug. ***

Comment 2 Sahina Bose 2013-08-26 10:18:10 UTC
Added code to execute a Gluster peer list when activating a host, as per the attached patch.
This ensures that the host state is set to Non Operational if gluster is not installed on it.
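
For illustration only, here is a minimal sketch of that kind of activation-time check; GlusterPeerLister, HostStatus and ActivateHostCheck are invented stand-ins, not the actual engine/VDSM classes touched by the patch:

// Illustrative sketch only; these types are hypothetical stand-ins,
// not the real oVirt engine / VDSM broker classes.
import java.util.List;

interface GlusterPeerLister {
    // Returns the peer list reported by glusterd on the host,
    // or throws if gluster is not installed/running there.
    List<String> listPeers(String hostId) throws Exception;
}

enum HostStatus { UP, NON_OPERATIONAL }

class ActivateHostCheck {
    private final GlusterPeerLister peerLister;

    ActivateHostCheck(GlusterPeerLister peerLister) {
        this.peerLister = peerLister;
    }

    // Called while activating a host that belongs to a gluster-enabled cluster.
    HostStatus activate(String hostId, boolean glusterCluster) {
        if (!glusterCluster) {
            return HostStatus.UP; // virt-only cluster: no gluster check needed
        }
        try {
            peerLister.listPeers(hostId); // fails if gluster is missing on the host
            return HostStatus.UP;
        } catch (Exception e) {
            // the gluster command could not be executed -> host lacks gluster capability
            return HostStatus.NON_OPERATIONAL;
        }
    }
}

Any failure of the gluster query is treated as a missing capability, so the host ends up Non-Operational instead of UP.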

Comment 3 SATHEESARAN 2013-11-27 15:15:07 UTC
Tested with RHEVM IS25 [3.3.0-0.37.beta1.el6ev] and performed the following steps

1. Created a 3.2 compatible Datacenter
2. Created a 3.2 compatible gluster enabled cluster
3. Created a 3.2 compatible virt cluster
4. Added RHEL 6.5 Node to virt cluster
5. Once this node is up, moved it to MAINTENANCE state
6. Moved this node to gluster cluster
7. Brought up the node

Result - The node is shown as UP, against the expectation of NON-OPERATIONAL.

Based on this, moving this bug back to ASSIGNED.

Comment 4 Sahina Bose 2013-11-29 09:09:32 UTC
The issue was that, though the host was moved to a gluster cluster, the Virt monitoring strategy was still used. Posted a patch http://gerrit.ovirt.org/#/c/21855/ to fix this.
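
For illustration, a rough sketch of re-selecting the monitoring strategy from the host's current cluster; the class names and selection logic below are assumptions, not the actual code in the gerrit patch:

// Illustrative only: strategy names and selection logic are assumptions.
interface MonitoringStrategy {
    boolean isHostOperational(String hostId);
}

class VirtMonitoringStrategy implements MonitoringStrategy {
    @Override
    public boolean isHostOperational(String hostId) {
        return true; // virt-side checks (storage, networks, ...) elided
    }
}

class GlusterMonitoringStrategy implements MonitoringStrategy {
    @Override
    public boolean isHostOperational(String hostId) {
        return glusterInstalled(hostId); // e.g. a gluster peer list succeeds
    }

    private boolean glusterInstalled(String hostId) {
        return false; // stub: would query the host
    }
}

class MonitoringStrategyFactory {
    // The bug in comment 3: after the cluster change the virt strategy was
    // still used. The fix is to resolve the strategy from the host's
    // *current* cluster, so a gluster-only cluster gets the gluster strategy.
    static MonitoringStrategy forCluster(boolean glusterService, boolean virtService) {
        if (glusterService && !virtService) {
            return new GlusterMonitoringStrategy();
        }
        return new VirtMonitoringStrategy();
    }
}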

Comment 5 SATHEESARAN 2013-12-12 16:07:06 UTC
Tested with RHEVM IS26 [3.3.0-0.38.rc.el6ev] and performed the following steps

1. Created a 3.2 compatible POSIXFS Datacenter
2. Created a 3.2 compatible gluster enabled cluster (note that this cluster is empty and contains no previously added RHSS Nodes)
3. Created a 3.2 compatible virt cluster
4. Added RHEL 6.5 Node to virt cluster
5. Once this node is up, moved it to MAINTENANCE state
6. Moved this node to gluster cluster
7. Brought up the node

Repeated the above steps for a 3.3 datacenter too.

As expected, the node is shown as NON-OPERATIONAL in the RHEVM UI, as the gluster command executed on that node (RHEL 6.5) fails.

In the case where the gluster cluster already contains RHSS Nodes, editing the virt host to move it into the gluster cluster throws the following error:
"Error while executing action ChangeVDSCluster: GlusterAddHostFailed "


The concern here is w.r.t. comment 1: the expected result is that the non-gluster host, when added to a gluster cluster, should go NON-OPERATIONAL.

But there are 2 outcomes, depending on whether the gluster cluster already contains RHSS Nodes:

1. When the gluster cluster is empty, adding the non-gluster host makes it go to NON-OPERATIONAL state
2. When the gluster cluster is non-empty, adding the non-gluster host itself is denied with the error message, "Error while executing action ChangeVDSCluster: GlusterAddHostFailed"

Is that the expected behavior?
I can move this bug to VERIFIED once I get the required info.

Comment 6 Sahina Bose 2013-12-13 11:05:37 UTC
Sas,

Yes - when a host is added to an empty cluster, we only check that gluster is installed on the host.

When a host is added to a cluster with existing hosts, we try to peer probe the newly added host - and this is when the add fails.

This is expected behaviour.
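
For illustration, the two outcomes from comment 5 roughly come down to the branch sketched below; all names here are hypothetical, not the actual engine code:

// Sketch of the behaviour confirmed above; names are invented for illustration.
import java.util.List;

class ChangeClusterCheck {

    enum Outcome { HOST_UP, HOST_NON_OPERATIONAL, CHANGE_REJECTED }

    Outcome moveHostToGlusterCluster(String hostId, List<String> existingClusterHosts) {
        if (existingClusterHosts.isEmpty()) {
            // Empty cluster: only verify that gluster is installed on the host.
            // If it is not, the change succeeds but the host goes Non-Operational.
            return glusterInstalled(hostId) ? Outcome.HOST_UP
                                            : Outcome.HOST_NON_OPERATIONAL;
        }
        // Cluster already has peers: a peer probe is attempted from an existing
        // host, and the cluster change itself is rejected if the probe fails
        // ("GlusterAddHostFailed").
        return peerProbe(existingClusterHosts.get(0), hostId) ? Outcome.HOST_UP
                                                              : Outcome.CHANGE_REJECTED;
    }

    private boolean glusterInstalled(String hostId) { return false; } // stub
    private boolean peerProbe(String fromHost, String newHost) { return false; } // stub
}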

Comment 7 SATHEESARAN 2013-12-13 11:47:25 UTC
Thanks, Sahina, for your comment 6.

Marking this bug as VERIFIED; verification details are available in comment 5.

Comment 8 errata-xmlrpc 2014-01-21 17:24:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html