Bug 961247
Summary: | [RHEVM-RHS] Host status is shown up in UI, when glusterd is stopped | |
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | SATHEESARAN <sasundar> |
Component: | ovirt-engine-webadmin-portal | Assignee: | Sahina Bose <sabose> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | | |
Version: | 3.2.0 | CC: | acathrow, ecohen, grajaiya, iheim, jkt, Rhev-m-bugs, rhs-bugs, sabose, scohen, shtripat |
Target Milestone: | --- | Flags: | scohen: Triaged+ |
Target Release: | 3.3.0 | | |
Hardware: | x86_64 | | |
OS: | Linux | | |
Whiteboard: | gluster | | |
Fixed In Version: | is23.1 | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | virt rhev integration |
Last Closed: | 2014-01-21 22:13:00 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
SATHEESARAN
2013-05-09 08:42:47 UTC
The consequence of this bug is that the RHEVM UI allows adding more than one host to a gluster cluster even when glusterd is not running on either of them (the UI shows both nodes as up). The result is a cluster that contains RHS nodes which are not actually clustered, since glusterd is not operational. If glusterd later comes up on one node, the other node is automatically removed from the cluster.

Comment 2 (Sahina Bose):
The periodic polling now runs a gluster peer command, to ensure glusterd is running on the node.

Comment 3 (SATHEESARAN):
(In reply to Sahina Bose from comment #2)
> The periodic polling now runs a gluster peer command, to ensure glusterd is
> running on the node

Sahina,

With RHEVM 3.3.0-0.34.beta1.el6ev and glusterfs-3.4.0.44rhs-1, I did the following:
1. In a 3.2 compatibility datacenter, created a 3.2 compatible cluster.
2. Using the RHEVM UI, added 2 RHSS nodes to the above cluster.
3. Using the RHEVM UI, created a distribute-replicate volume (2x2) and started it.
4. From one of the RHSS nodes (gluster CLI), stopped glusterd, i.e. "service glusterd stop".

Observation
===========
1. From the RHEVM UI, I could observe that soon after glusterd was stopped, the RHSS node was moved to "NON-OPERATIONAL" (it took less than 10 seconds in all three attempts).
2. But when I started glusterd on that node ("service glusterd start"), it took 2 minutes for the node to show as UP again in the RHEVM UI.

If the polling is periodic, why is there a delay in showing the node as "UP" in the RHEVM UI once glusterd is up? Is this expected? This doesn't happen when glusterd goes down: the time taken to show the node as non-operational after stopping glusterd is under ~10 seconds.

Comment 4 (Sahina Bose):
For Non-Operational hosts, an auto recovery is tried every 5 minutes (by default) to try and activate the host. This is when you see the host going back to UP state.

Comment 5 (SATHEESARAN):
(In reply to Sahina Bose from comment #4)
> For Non-Operational hosts, an auto recovery is tried every 5 minutes (by
> default) to try and activate the host. This is when you see the host going
> back to UP state.

Sahina, thanks for the quick response! With the verification steps described in comment 3, moving this bug to VERIFIED.

Closing - RHEV 3.3 Released
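Comment 2 states that the fix makes the periodic poll run a gluster peer command to verify glusterd is alive. A minimal sketch of such a check is below; this is not the ovirt-engine code, just an illustration of the idea that "gluster peer status" exits non-zero when the CLI cannot reach a running glusterd (the helper name and 10-second timeout are assumptions):

```python
import subprocess

def glusterd_is_running():
    """Best-effort liveness check: does the gluster CLI reach glusterd?

    Hypothetical helper, not from the actual engine code. The gluster CLI
    exits with a non-zero status when it cannot connect to glusterd, which
    is the signal a periodic poll can use to mark a host non-operational.
    """
    try:
        completed = subprocess.run(
            ["gluster", "peer", "status"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=10,  # assumed timeout; a hung CLI counts as down
        )
        return completed.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        # gluster CLI not installed, not executable, or hung
        return False
```

On a host without the gluster CLI installed, the call simply returns False rather than raising, which matches how a poller would treat an unreachable glusterd.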
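The timing asymmetry discussed in comments 3 and 4 (non-operational within ~10 seconds, but minutes to return to UP) can be modeled as two independent timers: a frequent status poll and a 5-minute auto-recovery pass. The toy state machine below is an illustration of that explanation, not ovirt-engine code; the 10-second poll interval is an assumption inferred from the observed detection time, while the 300-second recovery interval is the default stated in comment 4:

```python
# Toy model of the two monitoring loops described in the comments.
POLL_INTERVAL = 10        # seconds; assumed status-poll period (observed < 10s detection)
RECOVERY_INTERVAL = 300   # seconds; 5-minute auto-recovery default from comment 4

def next_state(state, glusterd_running, since_last_recovery):
    """Return the host state after one monitoring cycle.

    since_last_recovery: seconds elapsed since the last auto-recovery pass.
    """
    if state == "UP" and not glusterd_running:
        # A stopped glusterd is caught on the very next poll,
        # so the host is flagged within ~POLL_INTERVAL seconds.
        return "NON_OPERATIONAL"
    if state == "NON_OPERATIONAL" and glusterd_running:
        # Reactivation only happens on an auto-recovery pass, which runs
        # far less often than the poll: hence the minutes-long delay
        # before the host shows UP again.
        if since_last_recovery >= RECOVERY_INTERVAL:
            return "UP"
        return "NON_OPERATIONAL"
    return state
```

This reproduces the reported behavior: the down transition is bounded by the poll period, while the up transition is bounded by the auto-recovery period, so a ~2-minute wait before the host reappears as UP is expected rather than a bug.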