Bug 1098057
| Summary: | Host status is shown as up in RHSC UI, even when glusterd is stopped | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Prasanth <pprakash> |
| Component: | rhsc | Assignee: | Sahina Bose <sabose> |
| Status: | CLOSED ERRATA | QA Contact: | Prasanth <pprakash> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.0 | CC: | dpati, kmayilsa, nlevinki, rhs-bugs, rhsc-qe-bugs |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | RHGS 3.0.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | rhsc-3.0.0-0.6.master.el6_5 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1097736 | Environment: | |
| Last Closed: | 2014-09-22 19:09:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1097736 | | |
| Bug Blocks: | | | |
Description
Prasanth
2014-05-15 07:47:10 UTC
Version-Release number of selected component (if applicable):
rhsc-3.0.0-0.4.master.el6_5.noarch
glusterfs-3.6.0.1-1.el6rhs.x86_64
The following is seen in the engine log:
----------
2014-05-15 13:13:33,453 ERROR [org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService] (DefaultQuartzScheduler_Worker-54) [61d3da66] org.ovirt.engine.core.common.errors.VDSError@401088cf
2014-05-15 13:13:33,453 ERROR [org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob] (DefaultQuartzScheduler_Worker-54) [61d3da66] Error updating tasks from CLI: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1 (Failed with error GlusterVolumeStatusAllFailedException and code 4161)
at org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.getTaskListForCluster(GlusterTasksService.java:43) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateGlusterAsyncTasks(GlusterTasksSyncJob.java:84) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source) [:1.7.0_51]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_51]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_51]
at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
----------
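The engine log above shows the gluster CLI failing with "Connection failed" when glusterd is stopped, yet the host remains marked as up in the UI. As a minimal sketch (not the actual RHSC code; the function name, status strings, and `run_cli` callable are hypothetical), the expected behavior is that a status-sync routine should translate this CLI failure into a non-operational host status:

```python
# Hedged sketch: how a status-sync routine could map a failed gluster
# CLI call to the host status shown in the UI. This is illustrative
# only and is not taken from the RHSC source.

GLUSTERD_DOWN_MSG = "Connection failed. Please check if gluster daemon is operational."

def host_status(run_cli):
    """Return 'UP' if the gluster CLI responds, else 'NON_OPERATIONAL'.

    run_cli is any callable returning (return_code, stderr_text),
    standing in for an invocation such as `gluster volume status all`.
    """
    rc, err = run_cli()
    if rc != 0 and GLUSTERD_DOWN_MSG in err:
        # The bug described here: without a branch like this, the host
        # stayed 'UP' in the UI even though glusterd was stopped.
        return "NON_OPERATIONAL"
    return "UP"

# Simulated CLI results: a healthy daemon, then a stopped one.
print(host_status(lambda: (0, "")))                 # UP
print(host_status(lambda: (1, GLUSTERD_DOWN_MSG)))  # NON_OPERATIONAL
```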
The following is seen in the vdsm log:
----------
Thread-16::ERROR::2014-05-15 13:01:08,720::BindingXMLRPC::1083::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 1070, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 306, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterTasksList
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
Thread-16::DEBUG::2014-05-15 13:01:09,422::task::595::TaskManager.Task::(_updateState) Task=`b4ece6fc-d459-401b-a4f6-c2cb99bce1cd`::moving from state init -> state preparing
----------
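The vdsm traceback above ends in `GlusterCmdExecFailedException` with return code 1. The pattern can be sketched as follows: a wrapper runs a CLI command and raises a dedicated exception carrying the return code and stderr when the command fails. This is a hedged, self-contained illustration, not the actual vdsm implementation; `run_gluster` and the failing `sh -c` command used to simulate a stopped daemon are hypothetical stand-ins:

```python
import subprocess

class GlusterCmdExecFailedException(Exception):
    """Illustrative counterpart of the vdsm exception seen in the log."""
    def __init__(self, rc, err):
        self.rc = rc
        self.err = err
        super().__init__(
            "Command execution failed\nerror: %s\nreturn code: %d" % (err, rc))

def run_gluster(args):
    # Hypothetical wrapper: run a CLI command and raise on non-zero exit,
    # analogous to how vdsm surfaces "Connection failed" from the gluster CLI.
    proc = subprocess.run(args, capture_output=True, text=True)
    if proc.returncode != 0:
        raise GlusterCmdExecFailedException(proc.returncode, proc.stderr.strip())
    return proc.stdout

# Simulate a stopped daemon with a command that writes to stderr and exits 1:
try:
    run_gluster(["sh", "-c",
                 "echo 'Connection failed. Please check if gluster daemon "
                 "is operational.' >&2; exit 1"])
except GlusterCmdExecFailedException as e:
    print("return code:", e.rc)  # return code: 1
```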
Verified in rhsc-3.0.0-0.6.master.el6_5.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1277.html