Bug 1261689
| Field | Value |
| --- | --- |
| Summary | geo-replication faulty |
| Product | [Community] GlusterFS |
| Component | geo-replication |
| Version | mainline |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED CURRENTRELEASE |
| Severity | high |
| Priority | unspecified |
| Reporter | ikhan.kim <ihkim> |
| Assignee | Kotresh HR <khiremat> |
| CC | bugs, khiremat, khoj, sarumuga |
| Keywords | Triaged |
| Type | Bug |
| Doc Type | Bug Fix |
| Fixed In Version | glusterfs-3.11.0 |
| Cloned To | 1435587 |
| Bug Blocks | 1435587 |
| Last Closed | 2017-05-30 18:32:08 UTC |
Description ikhan.kim 2015-09-10 00:45:29 UTC
Hi, Venky. The configuration is as below:

    --------------------------------------------------------------
                     mount volume
    server1:   CN1-PRD-FS01      CN2-PRD-FS01   ==> replicated volume0
                    |                 |
               distributed       distributed
                    |                 |
    server2:   CN1-PRD-FS02      CN2-PRD-FS02   ==> replicated volume1
                    |                 |
                    |                 |
                   georeplicate cndr
    --------------------------------------------------------------

We restarted geo-replication and the problem was resolved, but we would like to know what caused the session to fall into the faulty state. Thanks in advance.

---

We hit the same error again. It fell into the faulty state, and we restarted it with the force option. Please let us know the cause as soon as possible. Thanks.

---

GlusterFS 3.6 is nearing its end of life; only important security bugs still have a chance of getting fixed. Moving this to the mainline version. If this needs to be fixed in 3.7 or 3.8, this bug should be cloned.

---

REVIEW: https://review.gluster.org/16997 (geo-rep: Improve worker log messages) posted (#2) for review on master by Kotresh HR (khiremat)

REVIEW: https://review.gluster.org/16997 (geo-rep: Improve worker log messages) posted (#3) for review on master by Kotresh HR (khiremat)

REVIEW: https://review.gluster.org/16997 (geo-rep: Improve worker log messages) posted (#4) for review on master by Kotresh HR (khiremat)

---

COMMIT: https://review.gluster.org/16997 committed in master by Aravinda VK (avishwan)

    commit e01025973c73e2bd0eda8cfed22b75617305d740
    Author: Kotresh HR <khiremat>
    Date:   Tue Apr 4 15:39:46 2017 -0400

        geo-rep: Improve worker log messages

        The monitor process expects the worker to establish an SSH tunnel
        to the slave node, mount the master volume locally within 60 secs,
        and acknowledge the monitor process by closing the feedback fd.
        If something goes wrong and the worker does not close the feedback
        fd within 60 secs, the monitor kills the worker, but there was no
        clue in the log messages about the actual issue. This patch adds a
        log message that indicates whether the worker hung during SSH or
        during the master mount.

        Change-Id: Id08a12fa6f3bba1d4fe8036728dbc290e6c14c8c
        BUG: 1261689
        Signed-off-by: Kotresh HR <khiremat>
        Reviewed-on: https://review.gluster.org/16997
        Smoke: Gluster Build System <jenkins.org>
        NetBSD-regression: NetBSD Build System <jenkins.org>
        CentOS-regression: Gluster Build System <jenkins.org>
        Reviewed-by: Aravinda VK <avishwan>

---

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/
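The workaround described above (restarting the session with the force option) maps onto the standard geo-replication CLI subcommands `status`, `stop force`, and `start force`. A minimal sketch of scripting that check-and-restart, assuming hypothetical names for the master volume and the cndr slave:

```python
import subprocess

# Hypothetical session names; substitute the real master volume
# and slave host::volume for your deployment.
MASTER_VOL = "volume0"
SLAVE = "cndr-host::volume0"

def georep(*args):
    """Run a gluster geo-replication subcommand for this session."""
    return subprocess.run(
        ["gluster", "volume", "geo-replication", MASTER_VOL, SLAVE, *args],
        capture_output=True, text=True)

# Check the session status; if any worker reports Faulty, restart
# the session with the force option, as the reporter did by hand.
status = georep("status")
print(status.stdout)
if "Faulty" in status.stdout:
    georep("stop", "force")
    georep("start", "force")
```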
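The commit message above describes the mechanism behind these faulty cycles: the worker must establish the SSH tunnel, mount the master volume, and acknowledge the monitor by closing a feedback fd within 60 seconds, or the monitor kills it. A minimal Python sketch of that handshake pattern (not the actual gsyncd code; the setup helpers are stand-ins):

```python
import os
import select
import signal
import sys
import time

FEEDBACK_TIMEOUT = 60  # seconds the monitor waits for the worker


def establish_ssh_tunnel():
    time.sleep(1)  # stand-in for the real SSH tunnel setup


def mount_master_volume():
    time.sleep(1)  # stand-in for the real local mount of the master


def monitor():
    feedback_rd, feedback_wr = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Worker: do the setup, then acknowledge the monitor by
        # closing the write end of the feedback pipe.
        os.close(feedback_rd)
        establish_ssh_tunnel()
        mount_master_volume()
        os.close(feedback_wr)
        sys.exit(0)

    # Monitor: keep only the read end; it hits EOF (becomes readable)
    # once the worker closes its write end.
    os.close(feedback_wr)
    ready, _, _ = select.select([feedback_rd], [], [], FEEDBACK_TIMEOUT)
    if ready:
        print("worker acknowledged; setup finished")
    else:
        # This is the spot where the patch adds a message saying
        # which phase (SSH or mount) the worker was stuck in.
        print("worker did not close feedback fd in %ds; killing it"
              % FEEDBACK_TIMEOUT)
        os.kill(pid, signal.SIGKILL)
    os.waitpid(pid, 0)


if __name__ == "__main__":
    monitor()
```

Closing the write end is a one-shot, data-free acknowledgement: the pipe's read end reaches EOF and becomes readable, which is exactly what select() waits for.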