Bug 1203506
| Summary: | Bad Volume Specification \| Connection Time Out | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | punit <hypunit> |
| Component: | glusterd | Assignee: | punit <hypunit> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.6.1 | CC: | bugs, ggarg, hypunit, rkavunga |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-02-23 12:33:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
punit
2015-03-19 02:42:03 UTC
I have found some disconnection errors in the brick logs...

Hi All,

With the help of the gluster community and the ovirt-china community, my issue got resolved. The main root cause was the following:

1. The glob operation takes quite a long time, longer than the ioprocess default of 60s.
2. python-ioprocess was updated, so a single change to the configuration file no longer takes effect; because of this the code has to be patched manually.

Solution (needs to be done on all the hosts):

1. Add the ioprocess timeout value to the /etc/vdsm/vdsm.conf file:

       [irs]
       process_pool_timeout = 180

2. Check /usr/share/vdsm/storage/outOfProcess.py, line 71, and see whether there is still "IOProcess(DEFAULT_TIMEOUT)" in it. If yes, then changing the configuration file takes no effect, because the timeout is now the third parameter of IOProcess.__init__(), not the second.
3. Change IOProcess(DEFAULT_TIMEOUT) to IOProcess(timeout=DEFAULT_TIMEOUT), remove the /usr/share/vdsm/storage/outOfProcess.pyc file, and restart the vdsm and supervdsm services on all hosts (see the sketch after this comment).

Thanks,

Thanks punit for updating the bug. Can you please close the bug?

Closing this bug as per the reporter's comment #c2; the problem has been solved. Feel free to re-open this bug if the issue still exists.
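The keyword-argument change in step 3 is the crux of the workaround. Below is a minimal, hypothetical sketch of what that edit looks like, not the verbatim vdsm source: only the `IOProcess(DEFAULT_TIMEOUT)` → `IOProcess(timeout=DEFAULT_TIMEOUT)` change is taken from the report, while the import, the `DEFAULT_TIMEOUT` value, and the surrounding setup are illustrative assumptions.

```python
# Sketch of the outOfProcess.py change from step 3 above (illustrative, not
# the verbatim vdsm code). Assumes python-ioprocess exposes IOProcess and
# accepts "timeout" as a keyword argument, as the report states.
from ioprocess import IOProcess

# Illustrative value; in vdsm this is what [irs] process_pool_timeout in
# /etc/vdsm/vdsm.conf is meant to control.
DEFAULT_TIMEOUT = 180

# Broken call: after the python-ioprocess update, timeout is the third
# positional parameter of IOProcess.__init__(), so passing it positionally
# binds DEFAULT_TIMEOUT to a different parameter and the configured timeout
# is silently ignored.
#   proc = IOProcess(DEFAULT_TIMEOUT)

# Fixed call: pass the timeout by keyword so it reaches the right parameter
# regardless of the positional order.
proc = IOProcess(timeout=DEFAULT_TIMEOUT)
```

As the reporter notes, removing the stale /usr/share/vdsm/storage/outOfProcess.pyc before restarting vdsm and supervdsm ensures the patched module, rather than the cached bytecode of the old call, is the one that gets loaded.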