Description of problem:
When a host is moved from a virt cluster to a gluster cluster, there is no check for gluster capabilities on the host.

Version-Release number of selected component (if applicable):
RHEV-M 3.2

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
The host state is UP if moved from a virt cluster to an empty gluster cluster.

Expected results:
When a RHEV-H node is moved from a virt cluster to a gluster cluster, it should go Non-Operational, as the node lacks the required packages.

Additional info:
Currently, if the host is moved to a gluster cluster that contains other hosts, its status goes to Non-Operational because the gluster peer probe fails on the newly added RHEV-H host. The peer probe is not performed, however, when the host is added to an empty cluster.
*** Bug 965068 has been marked as a duplicate of this bug. ***
Added code to execute gluster peer list when activating a host, as per the attached patch. This ensures the host state is set to Non-Operational if gluster is not installed.
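A minimal sketch of the kind of check the patch describes; this is an illustration, not the actual oVirt code, and the function and state names are hypothetical. The idea: if the gluster CLI is not even present on the host, an activation-time "gluster peer list" would fail, so the host should be marked Non-Operational.

```python
import shutil

def activation_state(gluster_cli: str = "gluster") -> str:
    """Hypothetical stand-in for the activation-time check: a host where
    the gluster CLI is missing cannot answer 'gluster peer list', so it
    is reported NON_OPERATIONAL instead of UP."""
    return "UP" if shutil.which(gluster_cli) else "NON_OPERATIONAL"

# A deliberately bogus CLI name, to mimic a host without gluster installed.
print(activation_state("no-such-gluster-cli"))
```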
Tested with RHEVM IS25 [3.3.0-0.37.beta1.el6ev] and performed the following steps:
1. Created a 3.2 compatible Datacenter
2. Created a 3.2 compatible gluster-enabled cluster
3. Created a 3.2 compatible virt cluster
4. Added a RHEL 6.5 node to the virt cluster
5. Once this node was up, moved it to MAINTENANCE state
6. Moved this node to the gluster cluster
7. Brought up the node

Result: the node is shown UP, against the expectation of NON-OPERATIONAL. Based on this, moving this bug back to ASSIGNED.
The issue was that, even though the host had been moved to a gluster cluster, the Virt monitoring strategy was still used. Posted a patch http://gerrit.ovirt.org/#/c/21855/ to fix this.
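The fix amounts to deriving the monitoring strategy from the cluster the host is in now, rather than reusing the one chosen when it was in the old cluster. A hedged sketch under that assumption (names are illustrative, not the actual engine code):

```python
def monitoring_strategy(cluster_is_gluster: bool) -> str:
    # Re-evaluate for the host's *current* cluster; the bug was that the
    # Virt strategy picked in the old cluster survived the move.
    return "GlusterMonitoringStrategy" if cluster_is_gluster else "VirtMonitoringStrategy"

print(monitoring_strategy(True))   # host moved into a gluster cluster
print(monitoring_strategy(False))  # host in a plain virt cluster
```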
Tested with RHEVM IS26 [3.3.0-0.38.rc.el6ev] and performed the following steps:
1. Created a 3.2 compatible POSIXFS Datacenter
2. Created a 3.2 compatible gluster-enabled cluster (note that this cluster is empty and contains no previously added RHSS nodes)
3. Created a 3.2 compatible virt cluster
4. Added a RHEL 6.5 node to the virt cluster
5. Once this node was up, moved it to MAINTENANCE state
6. Moved this node to the gluster cluster
7. Brought up the node

Repeated the above steps for a 3.3 datacenter too. As expected, the node is shown as NON-OPERATIONAL in the RHEVM UI, since the engine tries to execute a gluster command on that node (RHEL 6.5).

In the case where the gluster cluster already contains RHSS nodes, editing the virt host to move it to the gluster cluster itself throws an error: "Error while executing action ChangeVDSCluster: GlusterAddHostFailed"

My concern, with respect to comment 1, is that the expected result is for the non-gluster host to go NON-OPERATIONAL when added to a gluster cluster. But there are two outcomes, depending on whether the gluster cluster already contains RHSS nodes:
1. When the gluster cluster is empty, adding the non-gluster host makes it go to NON-OPERATIONAL state
2. When the gluster cluster is non-empty, adding the non-gluster host is itself denied with the error message "Error while executing action ChangeVDSCluster: GlusterAddHostFailed"

Is that the expected behavior? I can move this to VERIFIED once I get the required info.
Sas, yes - when a host is added to an empty cluster, we only check that gluster is installed on the host. When a host is added to a cluster with existing hosts, we try to peer probe the newly added host, and this is when it fails to add. This is expected behaviour.
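The explanation above describes two code paths. A hypothetical model of that behavior (not actual engine code; the function name and parameters are made up for illustration):

```python
def add_host(host_has_gluster: bool, cluster_has_peers: bool) -> str:
    if cluster_has_peers:
        # Non-empty cluster: an existing node peer-probes the new host;
        # without gluster on the host, the probe fails and the add is denied.
        if not host_has_gluster:
            return "Error while executing action ChangeVDSCluster: GlusterAddHostFailed"
        return "UP"
    # Empty cluster: there is no peer to probe from, so only the
    # install check runs on activation; a host without gluster
    # goes Non-Operational instead of being rejected outright.
    return "UP" if host_has_gluster else "NON_OPERATIONAL"

print(add_host(False, cluster_has_peers=False))  # empty gluster cluster
print(add_host(False, cluster_has_peers=True))   # cluster with RHSS nodes
```

This matches the two outcomes observed during verification: Non-Operational for an empty cluster, and an outright GlusterAddHostFailed error for a populated one.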
Thanks Sahina for your comment 6. Marking this bug as VERIFIED, with verification details available in comment 5.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2014-0038.html