Bug 968178 - [RHEVM-RHS] Should check for gluster capabilities when moving host from virt to gluster cluster
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-webadmin-portal
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Sahina Bose
QA Contact: SATHEESARAN
Whiteboard: gluster
Duplicates: 965068
Depends On:
Blocks: 3.3snap4
 
Reported: 2013-05-29 03:34 EDT by Sahina Bose
Modified: 2016-02-10 13:58 EST
CC List: 10 users

See Also:
Fixed In Version: is26
Doc Type: Bug Fix
Doc Text:
Previously, when a host was moved from a Virt cluster to a Gluster cluster, there was no check for Gluster capabilities on the host. Now, during a cluster change, the host is checked for Gluster capabilities, and if the check fails the host is not activated. The VDSM dictionary is also updated, so the service monitoring strategy is updated accordingly.
Story Points: ---
Clone Of:
Environment:
virt rhev integration
Last Closed: 2014-01-21 12:24:19 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
scohen: Triaged+


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 16898 None None None Never
oVirt gerrit 21081 None None None Never
oVirt gerrit 21855 None None None Never
oVirt gerrit 21905 None None None Never
Red Hat Product Errata RHSA-2014:0038 normal SHIPPED_LIVE Important: Red Hat Enterprise Virtualization Manager 3.3.0 update 2014-01-21 17:03:06 EST

Description Sahina Bose 2013-05-29 03:34:45 EDT
Description of problem:
When a host is moved from a virt cluster to a gluster cluster, there is no check for gluster capabilities on the host.

Version-Release number of selected component (if applicable):
RHEV-M 3.2

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

The host state is UP if moved from a virt cluster to an empty gluster cluster.


Expected results:

When a RHEV-H node is moved from a virt cluster to a gluster cluster, it should become Non-Operational, as the node does not have the required packages.

Additional info:
Currently, if the host is moved to a gluster cluster with other hosts, the status goes to Non-Operational, as the gluster peer probe fails on the newly added RHEV-H host.
However, a gluster peer probe is not performed if the host is being added to an empty cluster.
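
For clarity, a minimal sketch of the flow described above (hypothetical class and method names, not the actual ovirt-engine code):

// Hypothetical illustration of the behaviour described in this report;
// class and method names do not correspond to the real ovirt-engine code.
public class HostClusterChangeSketch {

    interface Host { boolean peerProbeSucceeds(); }

    interface Cluster {
        boolean supportsGlusterService();
        boolean hasOtherHosts();
    }

    enum HostStatus { UP, NON_OPERATIONAL }

    HostStatus statusAfterMove(Host host, Cluster target) {
        if (!target.supportsGlusterService()) {
            return HostStatus.UP;                        // plain virt cluster
        }
        if (target.hasOtherHosts()) {
            // Existing behaviour: the peer probe from an existing node fails
            // on a host without gluster, so the host goes Non-Operational.
            return host.peerProbeSucceeds() ? HostStatus.UP
                                            : HostStatus.NON_OPERATIONAL;
        }
        // Reported problem: for an empty gluster cluster no peer probe (and
        // no other gluster capability check) is performed, so the host stays UP.
        return HostStatus.UP;
    }
}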
Comment 1 Itamar Heim 2013-05-29 05:43:33 EDT
*** Bug 965068 has been marked as a duplicate of this bug. ***
Comment 2 Sahina Bose 2013-08-26 06:18:10 EDT
Added code to execute a Gluster peer list command when activating a host, as per the attached patch.
This ensures that the host state is set to Non-Operational if gluster is not installed.
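
Roughly, the check behaves along these lines - a minimal sketch with hypothetical names, not the actual patch (the real check goes through VDSM):

import java.io.IOException;
import java.util.List;

// Hypothetical sketch of the gluster capability check run on host activation;
// these names are illustrative only and do not match the ovirt-engine/VDSM API.
public class GlusterCapabilityCheckSketch {

    interface GlusterClient {
        /** Runs a lightweight gluster verb (listing peers) on the host;
         *  throws if gluster is not installed or glusterd is unreachable. */
        List<String> listPeers(String hostId) throws IOException;
    }

    /** Returns true when the host can participate in a gluster cluster. */
    boolean hasGlusterCapability(GlusterClient client, String hostId) {
        try {
            client.listPeers(hostId);
            return true;
        } catch (IOException e) {
            // The gluster peer list failed: treat the host as lacking gluster,
            // so activation marks it Non-Operational.
            return false;
        }
    }
}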
Comment 3 SATHEESARAN 2013-11-27 10:15:07 EST
Tested with RHEVM IS25 [3.3.0-0.37.beta1.el6ev] and performed the following steps

1. Created a 3.2 compatible Datacenter
2. Created a 3.2 compatible gluster enabled cluster
3. Created a 3.2 compatible virt cluster
4. Added RHEL 6.5 Node to virt cluster
5. Once this node is up, moved it to MAINTENANCE state
6. Moved this node to gluster cluster
7. Brought up the node

Result - The node is shown as UP, against the expectation of it showing as NON-OPERATIONAL.

Based on this, moving this bug back to ASSIGNED.
Comment 4 Sahina Bose 2013-11-29 04:09:32 EST
The issue was that although the host was moved to a gluster cluster, the Virt monitoring strategy was still used. Posted a patch http://gerrit.ovirt.org/#/c/21855/ to fix this.
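
In effect, the monitoring strategy is re-derived from the cluster the host currently belongs to. A hedged sketch (hypothetical names, not the actual gerrit patch):

// Hypothetical sketch of picking the monitoring strategy from the host's
// current cluster; names are illustrative, not the actual ovirt-engine code.
public class MonitoringStrategySketch {

    interface MonitoringStrategy { void checkHost(String hostId); }

    static class VirtStrategy implements MonitoringStrategy {
        public void checkHost(String hostId) { /* virt-only checks */ }
    }

    static class GlusterStrategy implements MonitoringStrategy {
        public void checkHost(String hostId) { /* gluster capability checks */ }
    }

    /** Resolve the strategy from the target cluster's services, so a host
     *  moved from a virt cluster to a gluster-only cluster is monitored with
     *  the gluster strategy (and goes Non-Operational if gluster is missing). */
    MonitoringStrategy strategyFor(boolean glusterService, boolean virtService) {
        return (glusterService && !virtService) ? new GlusterStrategy()
                                                : new VirtStrategy();
    }
}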
Comment 5 SATHEESARAN 2013-12-12 11:07:06 EST
Tested with RHEVM IS26 [3.3.0-0.38.rc.el6ev] and performed the following steps

1. Created a 3.2 compatible POSIXFS Datacenter
2. Created a 3.2 compatible gluster enabled cluster (note that this cluster is empty and contains no previously added RHSS Nodes)
3. Created a 3.2 compatible virt cluster
4. Added RHEL 6.5 Node to virt cluster
5. Once this node is up, moved it to MAINTENANCE state
6. Moved this node to gluster cluster
7. Brought up the node

Repeated the above steps for a 3.3 datacenter too.

As expected, the node is shown as NON-OPERATIONAL in the RHEVM UI, as it tries to execute the gluster command on that node (RHEL 6.5).

In the case where the gluster cluster already contains RHSS Nodes, editing the virt host to move it to the gluster cluster throws an error as follows:
"Error while executing action ChangeVDSCluster: GlusterAddHostFailed"


The concern here is w.r.t. comment 1: the expected result is that the non-gluster host, when added to a gluster cluster, should go NON-OPERATIONAL.

But there are 2 outcomes based on whether the gluster cluster already contains RHSS Nodes or not:

1. When the gluster cluster is empty, adding the non-gluster host makes it go to the NON-OPERATIONAL state.
2. When the gluster cluster is non-empty, adding the non-gluster host is itself denied with the error message "Error while executing action ChangeVDSCluster: GlusterAddHostFailed".

Is that the expected behavior?
I could move it to the VERIFIED state once I get the required info.
Comment 6 Sahina Bose 2013-12-13 06:05:37 EST
Sas,

Yes - when a host is added to an empty cluster, we only check that gluster is installed on the host.

When a host is added to a cluster with existing hosts, we try to peer probe the newly added host - and this is when it fails to be added.

This is expected behaviour.
Comment 7 SATHEESARAN 2013-12-13 06:47:25 EST
Thanks Sahina for your comment 6.

Marking this bug as VERIFIED, with verification details available in comment 5.
Comment 8 errata-xmlrpc 2014-01-21 12:24:19 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html
