Bug 1098057 - Host status is shown as UP in RHSC UI, even when glusterd is stopped
Summary: Host status is shown as UP in RHSC UI, even when glusterd is stopped
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Sahina Bose
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1097736
Blocks:
 
Reported: 2014-05-15 07:47 UTC by Prasanth
Modified: 2015-05-15 17:43 UTC

Fixed In Version: rhsc-3.0.0-0.6.master.el6_5
Doc Type: Bug Fix
Doc Text:
Clone Of: 1097736
Environment:
Last Closed: 2014-09-22 19:09:44 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1277 0 normal SHIPPED_LIVE Red Hat Storage Console 3.0 enhancement and bug fix update 2014-09-22 23:06:30 UTC
oVirt gerrit 27838 0 None None None Never

Description Prasanth 2014-05-15 07:47:10 UTC
Cloning this bug for RHSC as the same issue is seen here as well.

Description of problem:
======================
RHSS node status is shown as UP, even though glusterd has actually been stopped from the gluster CLI

Version-Release number of selected component (if applicable):
=============================================================
RHEVM 3.4 (av9.1) [3.4.0-0.20.el6ev]
RHS 3.0 [ glusterfs-3.6.0-4.0.el6rhs ]

How reproducible:
=================
Consistent

Steps to Reproduce:
===================
1. Add an RHSS 3.0 node to a gluster-enabled cluster
2. Stop glusterd on the RHSS node
3. Check the status of the node in the RHEVM UI

Actual results:
==============
The RHSS node was shown as UP (marked with a green triangle)

Expected results:
=================
The RHSS node should be marked "Non-Operational", as glusterd is no longer running on that node
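The expected behaviour can be sketched as a simple status-mapping rule. This is a minimal illustration, not the actual engine code: `GlusterProbeResult`, `host_status_for`, and the status strings are hypothetical names invented here, assuming the engine classifies a host by the outcome of a gluster CLI probe on that node.

```python
# Hypothetical sketch: map the result of a gluster CLI probe on a node to the
# host status the UI should display. Names and status strings are invented
# for illustration; only the error text matches the logs in this bug.
from dataclasses import dataclass

@dataclass
class GlusterProbeResult:
    return_code: int  # exit status of the gluster CLI call on the node
    error: str        # stderr text captured from the CLI

def host_status_for(probe: GlusterProbeResult) -> str:
    """Return the status the UI should show for a gluster-only host."""
    # glusterd unreachable: the CLI exits non-zero and reports a connection
    # failure, exactly as seen in the engine log later in this bug.
    if probe.return_code != 0 and "Connection failed" in probe.error:
        return "NonOperational"
    return "Up"
```

Feeding this the error actually logged ("error: Connection failed. Please check if gluster daemon is operational.", return code 1) yields "NonOperational", which is what the reporter expects the UI to show.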

--- Additional comment from SATHEESARAN on 2014-05-14 08:27:05 EDT ---

The same test case worked well with RHEV 3.3 + RHS 2.1 U2 (Corbett), and the same was verified with https://bugzilla.redhat.com/show_bug.cgi?id=961247.

Hence, adding the REGRESSION keyword to this bug.

--- Additional comment from SATHEESARAN on 2014-05-14 09:25:53 EDT ---

An interesting observation:

1. I had 2 nodes in the gluster-enabled cluster
2. When I stopped glusterd on the first node in the cluster, as listed in the UI, the UI reflected the node's status as Non-Operational, as expected
3. After starting glusterd again, the status was set back to UP (this also works as expected)
4. When I stopped glusterd on the other node, the UI did not reflect the status change, even after waiting for a long time

--- Additional comment from SATHEESARAN on 2014-05-14 09:27:17 EDT ---



--- Additional comment from SATHEESARAN on 2014-05-14 09:28:19 EDT ---



--- Additional comment from SATHEESARAN on 2014-05-14 09:32:22 EDT ---

Comment 1 Prasanth 2014-05-15 07:50:13 UTC
Version-Release number of selected component (if applicable):

rhsc-3.0.0-0.4.master.el6_5.noarch
glusterfs-3.6.0.1-1.el6rhs.x86_64

Following is seen in engine log:

----------
2014-05-15 13:13:33,453 ERROR [org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService] (DefaultQuartzScheduler_Worker-54) [61d3da66] org.ovirt.engine.core.common.errors.VDSError@401088cf
2014-05-15 13:13:33,453 ERROR [org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob] (DefaultQuartzScheduler_Worker-54) [61d3da66] Error updating tasks from CLI: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1 (Failed with error GlusterVolumeStatusAllFailedException and code 4161)
        at org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.getTaskListForCluster(GlusterTasksService.java:43) [bll.jar:]
        at org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateGlusterAsyncTasks(GlusterTasksSyncJob.java:84) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source) [:1.7.0_51]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_51]
        at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_51]
        at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
        at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
----------


Following is seen in vdsm log:

----------
Thread-16::ERROR::2014-05-15 13:01:08,720::BindingXMLRPC::1083::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 1070, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 306, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterTasksList
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
Thread-16::DEBUG::2014-05-15 13:01:09,422::task::595::TaskManager.Task::(_updateState) Task=`b4ece6fc-d459-401b-a4f6-c2cb99bce1cd`::moving from state init -> state preparing
----------
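The vdsm traceback above ends in a GlusterCmdExecFailedException raised when the gluster CLI call fails. A minimal sketch of such a wrapper, simplified and not the real vdsm code (`run_gluster_cmd` is a hypothetical name), shows the pattern: run the command, and on a non-zero exit raise an exception carrying the return code and stderr, matching the message shape in the log.

```python
# Simplified sketch of a CLI wrapper in the style of vdsm's gluster code
# (not the actual implementation): failures surface as an exception that
# carries the return code and stderr, as in the traceback above.
import subprocess

class GlusterCmdExecFailedException(Exception):
    def __init__(self, rc, err):
        super().__init__(
            "Command execution failed\nerror: %s\nreturn code: %d" % (err, rc))
        self.rc = rc
        self.err = err

def run_gluster_cmd(argv):
    """Run a CLI command; raise on non-zero exit, as vdsm's wrapper does."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    if proc.returncode != 0:
        raise GlusterCmdExecFailedException(proc.returncode, proc.stderr.strip())
    return proc.stdout
```

With glusterd stopped, every such call on that node raises, so the caller (here, the engine's task-sync job) must treat the exception as "glusterd down" rather than leaving the host status unchanged.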

Comment 3 Prasanth 2014-05-27 07:38:00 UTC
Verified in rhsc-3.0.0-0.6.master.el6_5

Comment 6 errata-xmlrpc 2014-09-22 19:09:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1277.html

