Bug 968178
| Summary: | [RHEVM-RHS] Should check for gluster capabilities when moving host from virt to gluster cluster | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Sahina Bose <sabose> |
| Component: | ovirt-engine-webadmin-portal | Assignee: | Sahina Bose <sabose> |
| Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.2.0 | CC: | acathrow, ecohen, hchiramm, iheim, jkt, Rhev-m-bugs, sabose, sasundar, scohen, sdharane |
| Target Milestone: | --- | Flags: | scohen: Triaged+ |
| Target Release: | 3.3.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | gluster | | |
| Fixed In Version: | is26 | Doc Type: | Bug Fix |
| Doc Text: | Previously, when a host was moved from a virt cluster to a gluster cluster, there was no check for gluster capabilities on the host. Now, during a cluster change the host is checked for gluster capabilities, and if the check fails the host is not activated. The VDSM dictionary is also updated, so service monitoring strategies are updated. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | virt rhev integration |
| Last Closed: | 2014-01-21 17:24:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1044030 | | |
Description (Sahina Bose, 2013-05-29 07:34:45 UTC)
*** Bug 965068 has been marked as a duplicate of this bug. ***

Added code to execute the gluster peer list command when activating a host, as per the attached patch. This ensures that the host state is set to NON-OPERATIONAL if gluster is not installed.

Tested with RHEVM IS25 [3.3.0-0.37.beta1.el6ev] and performed the following steps:

1. Created a 3.2-compatible datacenter
2. Created a 3.2-compatible gluster-enabled cluster
3. Created a 3.2-compatible virt cluster
4. Added a RHEL 6.5 node to the virt cluster
5. Once this node was up, moved it to MAINTENANCE state
6. Moved this node to the gluster cluster
7. Brought up the node

Result: the node is shown UP, against the expectation of NON-OPERATIONAL. Based on this, moving this bug back to ASSIGNED.

The issue was that although the host was moved to a gluster cluster, the virt monitoring strategy was still used. Posted a patch http://gerrit.ovirt.org/#/c/21855/ to fix this.

Tested with RHEVM IS26 [3.3.0-0.38.rc.el6ev] and performed the following steps:

1. Created a 3.2-compatible POSIXFS datacenter
2. Created a 3.2-compatible gluster-enabled cluster (note that this cluster is empty and contains no previously added RHSS nodes)
3. Created a 3.2-compatible virt cluster
4. Added a RHEL 6.5 node to the virt cluster
5. Once this node was up, moved it to MAINTENANCE state
6. Moved this node to the gluster cluster
7. Brought up the node

Repeated the above steps for a 3.3 datacenter too. As expected, the node is shown as NON-OPERATIONAL in the RHEVM UI, since the engine tries to execute a gluster command on that node (RHEL 6.5).

In the case where the gluster cluster already contains RHSS nodes, editing the virt host to select the gluster cluster throws the following error: "Error while executing action ChangeVDSCluster: GlusterAddHostFailed"

The concern here, with respect to comment 1, is that the expected result is for the non-gluster host to go NON-OPERATIONAL when added to a gluster cluster. But there are two outcomes, depending on whether the gluster cluster already contains RHSS nodes:

1. When the gluster cluster is empty, adding the non-gluster host makes it go to NON-OPERATIONAL state.
2. When the gluster cluster is non-empty, adding the non-gluster host is denied with the error message "Error while executing action ChangeVDSCluster: GlusterAddHostFailed".

Is that the expected behavior? I could move this to VERIFIED state once I get the required info.

Sas, yes. When a host is added to an empty cluster, we only check that gluster is installed on the host. When a host is added to a cluster with existing hosts, we try to peer probe the newly added host, and this is when it fails to add. This is expected behaviour.

Thanks Sahina for your comment 6. Marking this bug as VERIFIED, with verification details available in comment 5.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html
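The behavior the comments converge on can be sketched in a few lines. This is a minimal, hypothetical Python illustration (the real implementation lives in the ovirt-engine Java code and the gerrit patch linked above); the function and state names here are assumptions chosen to mirror the comments: an empty target cluster only checks that gluster is installed (failing hosts activate into NON-OPERATIONAL), while a non-empty target cluster peer probes the new host and rejects the cluster change on failure.

```python
import subprocess

NON_OPERATIONAL = "NonOperational"
UP = "Up"

class GlusterAddHostFailed(Exception):
    """Raised when peer-probing the new host from an existing member fails."""

def gluster_installed(run=subprocess.run):
    # Capability check performed on activation: run a gluster CLI command
    # on the host; a missing binary or a timeout means gluster is absent.
    try:
        return run(["gluster", "peer", "status"],
                   capture_output=True, timeout=10).returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False

def move_host_to_gluster_cluster(host, cluster_hosts, installed, probe_ok):
    # installed: result of the capability check on `host`
    # probe_ok: result of 'gluster peer probe <host>' from an existing member
    if cluster_hosts:
        # Non-empty cluster: peer probe the new host; reject the action
        # on failure, as observed in comment 5.
        if not probe_ok:
            raise GlusterAddHostFailed(
                "Error while executing action ChangeVDSCluster: "
                "GlusterAddHostFailed")
        return UP
    # Empty cluster: only check that gluster is installed; if it is not,
    # the host is moved but ends up NON-OPERATIONAL on activation.
    return UP if installed else NON_OPERATIONAL
```

With this sketch, a non-gluster host moved into an empty gluster cluster returns NON-OPERATIONAL, and the same host moved into a populated cluster raises GlusterAddHostFailed, matching the two outcomes discussed in comments 5 and 6.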