Bug 1004716
Summary: | Dist-geo-rep: If a slave node goes down, the session connecting to that node doesn't go to faulty immediately and also doesn't sync files to slave. | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Vijaykumar Koppad <vkoppad> |
Component: | geo-replication | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
Status: | CLOSED EOL | QA Contact: | storage-qa-internal <storage-qa-internal> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 2.1 | CC: | avishwan, chrisw, csaba, david.macdonald, nsathyan, rhs-bugs, rwheeler, vagarwal |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2015-11-25 08:49:54 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Vijaykumar Koppad
2013-09-05 10:25:18 UTC

Earlier, there was no distribution of the aux-mount across the slave cluster; BUG 980049 tracked this. Distribution on the slave side was introduced in glusterfs-3.4.0.30rhs-2.el6rhs.x86_64.

The summary needs a minor (but significant) correction: syncing to the slave does not fail forever. It fails only for the period during which the slave machine is down; once the node comes back up, syncing works fine.

Targeting for the 3.0.0 (Denali) release. Dev ack to 3.0 RHS BZs.

Closing this bug since the RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again.
Earlier distribution of aux-mount on slave cluster was not there. We had a BUG 980049 for the same. This distribution on slave side was introduced in glusterfs-3.4.0.30rhs-2.el6rhs.x86_64. Need a minor change (but significant meaning) in summary of the bug. It doesn't fail to sync to slave forever. It fails to sync to slave for the period the slave machine is down. Once it comes back up, it works fine. Targeting for 3.0.0 (Denali) release. Dev ack to 3.0 RHS BZs Closing this bug since RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again. Closing this bug since RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again. |