Bug 1097736
| Summary: | [RHEVM-RHS] Host status is shown up in RHEVM UI, even when glusterd is stopped | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | SATHEESARAN <sasundar> |
| Component: | ovirt-engine-webadmin-portal | Assignee: | Sahina Bose <sabose> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.0 | CC: | acathrow, ecohen, eedri, gklein, iheim, obasan, rcyriac, Rhev-m-bugs, rhsc-qe-bugs, sabose, scohen, sherold, ssamanta, yeylon |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | 3.4.0 | Flags: | scohen: needinfo+ |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | gluster | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1098057 (view as bug list) | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1098057, 1108871 | | |
| Attachments: | | | |
Description SATHEESARAN 2014-05-14 12:25:15 UTC

The same test case worked well with RHEV 3.3 + RHS 2.1 U2 (Corbett), and was verified with https://bugzilla.redhat.com/show_bug.cgi?id=961247. Hence the REGRESSION keyword on this bug.

Interesting observation:

1. I had 2 nodes in the gluster-enabled cluster.
2. When I stopped glusterd on the first node in the cluster, the UI reflected the node's status as Non-Operational, as expected.
3. After starting glusterd, the status was set back to Up (this also works as expected).
4. When I stopped glusterd on the other node, the UI did not reflect the status change, even after waiting for a long time.

Created attachment 895496 [details]: sosreport from RHSS Node1

Created attachment 895497 [details]: sosreport from RHSS Node2

Created attachment 895498 [details]: sosreport from engine
The issue was due to the change that issues the `gluster peer status` command on the first available Up server. With this change, in addition to the list of servers returned, the status of each host as returned by glusterd must be used to move a host to the Non-Operational state. The patch was missed in the 3.4 branch.

Verified this issue with RHEVM 3.4 [av9.2] (3.4.0-0.21.el6ev) and glusterfs-3.6.0.5-1.el6rhs. Performed the following to verify this bug:

1. Created a DC (3.4).
2. Created a gluster-enabled cluster (3.4).
3. Added 2 RHS nodes to the cluster, one after the other.
4. Stopped glusterd on the second RHS node (from the gluster CLI).
5. The RHEVM UI showed that particular node turned Non-Operational.
6. Repeated the test with 4 nodes in the cluster, alternately stopping glusterd; the status was reflected accordingly.

Moving this bug to VERIFIED.

Closing as part of 3.4.0.
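The fixed logic described in the root-cause comment can be sketched roughly as follows. This is a hypothetical illustration, not actual ovirt-engine code; the function and state names are invented. The point is that a host must go Non-Operational in two cases: when it is absent from the `gluster peer status` output of the first Up server, and when it is present but glusterd reports it as disconnected (the case the missed 3.4 patch handled).

```python
# Hypothetical sketch of the host-status sync, assuming peer_status is a
# {hostname: state} mapping parsed from `gluster peer status` on the first
# Up server. Names are illustrative, not the real ovirt-engine API.

DISCONNECTED = "DISCONNECTED"

def sync_host_states(cluster_hosts, peer_status):
    """Return the new engine-side state for every host in the cluster."""
    new_states = {}
    for host in cluster_hosts:
        state = peer_status.get(host)
        if state is None or state == DISCONNECTED:
            # Missing from the peer list, OR listed but reported down by
            # glusterd: both cases must move the host to Non-Operational.
            # The regression was that the second case was not handled in
            # the 3.4 branch, so the host stayed "Up" in the UI.
            new_states[host] = "NON_OPERATIONAL"
        else:
            new_states[host] = "UP"
    return new_states
```

In the two-node reproduction above, stopping glusterd on the second node leaves that node listed but disconnected in the first node's peer output, which is exactly the branch the missed patch covers.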