Bug 961247 - [RHEVM-RHS] Host status is shown up in UI, when glusterd is stopped
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-webadmin-portal
Version: 3.2.0
Hardware: x86_64 Linux
Priority: unspecified  Severity: medium
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Sahina Bose
QA Contact: SATHEESARAN
Whiteboard: gluster
Depends On:
Blocks:
 
Reported: 2013-05-09 04:42 EDT by SATHEESARAN
Modified: 2016-02-10 13:58 EST
CC: 10 users

See Also:
Fixed In Version: is23.1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
virt rhev integration
Last Closed: 2014-01-21 17:13:00 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
scohen: Triaged+


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 16898 None None None Never
oVirt gerrit 21081 None None None Never

Description SATHEESARAN 2013-05-09 04:42:47 EDT
Description of problem:
Host status in a gluster cluster is shown as UP, even when glusterd is not running.

Version-Release number of selected component (if applicable):

RHS 2.1 - glusterfs-3.4.0.3rhs-1.el6rhs.x86_64 []

vdsm-python-4.10.2-17.0.el6ev.x86_64
vdsm-cli-4.10.2-17.0.el6ev.noarch
vdsm-xmlrpc-4.10.2-17.0.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Add a host to gluster cluster in RHEVM UI
2. After bootstrapping of the node, try stopping glusterd in that node
3. Check for the status of the node in RHEVM UI
  
Actual results:
The node is shown as UP even when glusterd is not running

Expected results:
The node should be marked appropriately when glusterd is not running.
At least an event or notification should inform that glusterd is not running

Additional info:
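For reference, a minimal sketch of the liveness check involved here: "gluster peer status" exits non-zero when glusterd is not running on the node, so the condition described above can be confirmed from the node itself. The helper name below is illustrative and is not part of RHEVM or VDSM.

    import subprocess

    def glusterd_running():
        """Return True if glusterd answers on this node.

        "gluster peer status" cannot reach the local daemon when glusterd
        is stopped and exits non-zero, so the exit code works as a simple
        liveness check.
        """
        result = subprocess.run(
            ["gluster", "peer", "status"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        print("glusterd is", "running" if glusterd_running() else "NOT running")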
Comment 1 SATHEESARAN 2013-05-09 06:21:22 EDT
The consequence of this bug is that the RHEVM UI allows adding more than one host to a gluster cluster, even when glusterd is not running on either of them. [Here, the UI shows both nodes as up.]

Ultimately, there is a cluster containing RHS nodes which in reality are not part of a cluster, as glusterd is not operational.

But after some time, if glusterd comes up on one node, that automatically removes the other from the cluster.
Comment 2 Sahina Bose 2013-11-21 06:03:26 EST
The periodic polling now runs a gluster peer command, to ensure glusterd is running on the node
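A minimal sketch of the kind of periodic check described above (the interval, the ssh transport, and the callback are hypothetical; the real engine polls the host through VDSM, not ssh):

    import subprocess
    import threading

    POLL_INTERVAL = 10  # seconds; hypothetical, the engine's interval is configurable

    def poll_gluster_host(host, on_glusterd_down):
        """Periodically run a gluster peer command against `host` and report
        the host when the command fails (i.e. glusterd is not running)."""
        def check():
            rc = subprocess.call(
                ["ssh", host, "gluster", "peer", "status"],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            if rc != 0:
                on_glusterd_down(host)
            threading.Timer(POLL_INTERVAL, check).start()
        check()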
Comment 3 SATHEESARAN 2013-11-21 06:36:45 EST
(In reply to Sahina Bose from comment #2)
> The periodic polling now runs a gluster peer command, to ensure glusterd is
> running on the node

Sahina,

With RHEVM 3.3.0-0.34.beta1.el6ev and glusterfs-3.4.0.44rhs-1, I did the following:
1. In a 3.2 compatibility datacenter, created a 3.2 compatible cluster
2. Using the RHEVM UI, added 2 RHSS nodes to the above created cluster.
3. Using the RHEVM UI, created a distribute-replicate volume (2x2) and started it.
4. From one of the RHSS nodes (gluster CLI), stopped glusterd,
i.e. service glusterd stop

Observation
===========
1. From the RHEVM UI, I could observe that soon after glusterd was stopped, the RHSS node was moved to "NON-OPERATIONAL" (it took less than 10 secs in all three attempts)
2. But when I started glusterd on that node (service glusterd start), it took 2 minutes for the node to show as UP again in the RHEVM UI

If the polling is periodic, why is there a delay in showing the node as "UP" in the RHEVM UI once glusterd is up? Is this expected?
This delay doesn't happen when glusterd goes down, i.e. the time taken to show the node as non-operational after stopping glusterd is under ~10 secs.
Comment 4 Sahina Bose 2013-11-21 06:39:06 EST
For Non-Operational hosts, an auto recovery is tried every 5 minutes (by default) to try and activate the host. This is when you see the host going back to UP state.
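A minimal sketch of that auto-recovery behaviour (the function names are hypothetical; in ovirt-engine the recovery schedule is configurable, 5 minutes by default per this comment):

    import time

    AUTO_RECOVERY_INTERVAL = 5 * 60  # seconds; 5-minute default per comment 4

    def auto_recovery_loop(get_non_operational_hosts, try_activate):
        """Every AUTO_RECOVERY_INTERVAL, try to re-activate non-operational
        hosts. A host whose glusterd is back up passes the check and returns
        to UP, which is why recovery can take a couple of minutes after
        glusterd is restarted."""
        while True:
            for host in get_non_operational_hosts():
                try_activate(host)  # moves the host back to UP if the check passes
            time.sleep(AUTO_RECOVERY_INTERVAL)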
Comment 5 SATHEESARAN 2013-11-21 06:48:37 EST
(In reply to Sahina Bose from comment #4)
> For Non-Operational hosts, an auto recovery is tried every 5 minutes (by
> default) to try and activate the host. This is when you see the host going
> back to UP state.

Sahina, 
Thanks for the quick response !!

With the verification steps described in comment 3, moving this bug to VERIFIED.
Comment 6 Itamar Heim 2014-01-21 17:13:00 EST
Closing - RHEV 3.3 Released
Comment 7 Itamar Heim 2014-01-21 17:21:02 EST
Closing - RHEV 3.3 Released
