
Bug 1097736

Summary: [RHEVM-RHS] Host status is shown up in RHEVM UI, even when glusterd is stopped
Product: Red Hat Enterprise Virtualization Manager
Reporter: SATHEESARAN <sasundar>
Component: ovirt-engine-webadmin-portal
Assignee: Sahina Bose <sabose>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.4.0
CC: acathrow, ecohen, eedri, gklein, iheim, obasan, rcyriac, Rhev-m-bugs, rhsc-qe-bugs, sabose, scohen, sherold, ssamanta, yeylon
Target Milestone: ---
Keywords: Regression
Target Release: 3.4.0
Flags: scohen: needinfo+
Hardware: x86_64
OS: Linux
Whiteboard: gluster
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1098057
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1098057, 1108871
Attachments:
sosreport from RHSS Node1 (flags: none)
sosreport from RHSS Node2 (flags: none)
sosreport from engine (flags: none)

Description SATHEESARAN 2014-05-14 12:25:15 UTC
Description of problem:
======================
RHSS Node status is shown as UP, even though glusterd has actually been stopped from the gluster CLI

Version-Release number of selected component (if applicable):
=============================================================
RHEVM 3.4 (av9.1) [3.4.0-0.20.el6ev]
RHS 3.0 [ glusterfs-3.6.0-4.0.el6rhs ]

How reproducible:
=================
Consistent

Steps to Reproduce:
===================
1. Add an RHSS 3.0 node to a gluster-enabled cluster
2. Stop glusterd on the RHSS node (see the command sketch below)
3. Check the status of the node in the RHEVM UI
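
A minimal command sketch for steps 2 and 3, assuming an EL6-based RHSS node where glusterd runs as a SysV init service (the hostname in the prompt is a placeholder):

[root@rhss-node1 ~]# service glusterd stop      # step 2: stop the gluster management daemon
[root@rhss-node1 ~]# service glusterd status    # confirm it is stopped before checking the RHEVM UI (step 3)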

Actual results:
==============
The RHSS node was shown as UP (marked with a green triangle)

Expected results:
=================
The RHSS node should be marked "Non-Operational", as glusterd is no longer running on that node

Comment 1 SATHEESARAN 2014-05-14 12:27:05 UTC
The same test case was working well with RHEV 3.3 + RHS 2.1 U2 (Corbett), and the same was verified via https://bugzilla.redhat.com/show_bug.cgi?id=961247

Hence, adding the Regression keyword to this bug.

Comment 2 SATHEESARAN 2014-05-14 13:25:53 UTC
An interesting observation:

1. I had 2 nodes in the gluster-enabled cluster.
2. When I stopped glusterd on the first node in the cluster (as listed in the UI), the UI reflected the status of the node as Non-Operational, as expected.
3. After starting glusterd again, the status was set back to UP (this also works as expected).
4. When I stopped glusterd on the other node, the UI did not reflect the status change, even after waiting for a long time.

Comment 3 SATHEESARAN 2014-05-14 13:27:17 UTC
Created attachment 895496 [details]
sosreport from RHSS Node1

Comment 4 SATHEESARAN 2014-05-14 13:28:19 UTC
Created attachment 895497 [details]
sosreport from RHSS Node2

Comment 5 SATHEESARAN 2014-05-14 13:32:22 UTC
Created attachment 895498 [details]
sosreport from engine

Comment 6 Sahina Bose 2014-05-19 09:11:40 UTC
The issue was due to the change made to issue the gluster peer status command on the first available UP server. With this change, in addition to the list of servers returned, the status of each host as reported by glusterd has to be used to move a host to the Non-Operational state.

The patch for this was missed in the 3.4 branch.
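
For illustration, a sketch of what the engine sees in this flow: with glusterd stopped on a node, gluster peer status run from the first available UP server reports that peer as disconnected, and it is this per-host state that should move the host to Non-Operational (the hostname and UUID below are placeholders):

[root@rhss-node1 ~]# gluster peer status
Number of Peers: 1

Hostname: rhss-node2.example.com
Uuid: 5f6e8cbe-3a60-4b4f-9ac5-6c3d8f1e2a7b
State: Peer in Cluster (Disconnected)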

Comment 8 SATHEESARAN 2014-05-22 14:43:55 UTC
Verified this issue with RHEVM 3.4 [av9.2] (3.4.0-0.21.el6ev) and glusterfs-3.6.0.5-1.el6rhs

Performed the following to verify this bug:
1. Created a DC (3.4)
2. Created a gluster-enabled cluster (3.4)
3. Added 2 RHS nodes to the cluster, one after the other
4. Stopped glusterd on the second RHS node (from the gluster CLI)
5. The RHEVM UI showed that particular node turning Non-Operational
6. Repeated the test with 4 nodes in the cluster, stopping glusterd on the nodes alternately; the status was reflected accordingly

Moving this bug to VERIFIED

Comment 9 Itamar Heim 2014-06-12 14:11:53 UTC
Closing as part of 3.4.0