Bug 1610743
| Field | Value |
| --- | --- |
| Summary | Directory is incorrectly reported as in split-brain when dirty marking is there |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | replicate |
| Version | rhgs-3.4 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | unspecified |
| Reporter | Vijay Avuthu <vavuthu> |
| Assignee | Ravishankar N <ravishankar> |
| QA Contact | Vijay Avuthu <vavuthu> |
| CC | anepatel, apaladug, chpai, ravishankar, rhs-bugs, sanandpa, sankarshan, sheggodu, storage-qa-internal, vdas |
| Keywords | ZStream |
| Target Release | RHGS 3.4.z Batch Update 1 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-3.12.2-20 |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2018-10-31 08:46:14 UTC |

Doc Text: Previously, when directories had dirty markers set on them due to AFR transaction failures, or when a replace-brick/reset-brick operation was performed, heal info reported them as being in split-brain. With this fix, heal info no longer treats the presence of dirty markers as an indication of split-brain and no longer lists these entries as split-brain. (See the illustrative commands below.)
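A minimal sketch of how the state described in the Doc Text can be checked from the command line, assuming a replica-3 volume named replicate_bug with bricks at /bricks/brick3/day4 (names taken from the verification output below; adjust them to your setup):

```
# List entries pending heal; with the fix, a directory carrying only a
# dirty marker shows up here as needing heal, not as split-brain.
gluster volume heal replicate_bug info

# List only entries that really are in split-brain; a dirty-marked
# directory should no longer appear in this output.
gluster volume heal replicate_bug info split-brain

# On a brick host, inspect the AFR dirty marker on a directory directly.
getfattr -n trusted.afr.dirty -e hex /bricks/brick3/day4/dir1
```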
Description
Vijay Avuthu 2018-08-01 11:19:35 UTC
Attempted an upstream fix via https://review.gluster.org/21135 (BZ 1626994).

Verified the fix, see below.

Build used: glusterfs-3.12.2-21.el7rhgs.x86_64

At step 5 from the bug description, no split-brain is reported by heal info:

```
# gluster vol heal replicate_bug info
Brick 10.70.47.133:/bricks/brick3/day4
Status: Connected
Number of entries: 0

Brick 10.70.46.168:/bricks/brick3/day4
/dir1/300mbfile
/dir1
Status: Connected
Number of entries: 2

Brick 10.70.47.102:/bricks/brick3/day4
Status: Connected
Number of entries: 0
```

Also, we can see the dirty bit at step 6, which is as expected:

```
# getfattr -d -m . -e hex /bricks/brick3/day4/dir1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick3/day4/dir1
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000001
trusted.gfid=0xbd6c8c8584b7476c9eba9c8d128e5765
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.dht.mds=0x00000000

# getfattr -d -m . -e hex /bricks/brick3/day4/dir1/300mbfile
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick3/day4/dir1/300mbfile
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.replicate_bug-client-0=0x000000010000000100000000
trusted.afr.replicate_bug-client-2=0x000000010000000100000000
trusted.gfid=0x1c851d3ae1df4abb93f204761a156d03
trusted.gfid2path.a63e0b78c611e2f2=0x62643663386338352d383462372d343736632d396562612d3963386431323865353736352f3330306d6266696c65
```

Moving it to Verified.

Looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3432
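As an aside for readers decoding the getfattr output above: each trusted.afr.* changelog xattr (including trusted.afr.dirty) packs three big-endian 32-bit counters for pending data, metadata, and entry operations. A small bash helper (hypothetical, not part of glusterfs) makes the values above readable:

```
# Hypothetical helper: split a 12-byte AFR changelog xattr into its
# data/metadata/entry pending counters (each a big-endian 32-bit value).
decode_afr_xattr() {
    local hex=${1#0x}
    echo "data=$((16#${hex:0:8})) metadata=$((16#${hex:8:8})) entry=$((16#${hex:16:8}))"
}

# Values from the verification output above:
decode_afr_xattr 0x000000010000000100000000   # client-0/client-2 on 300mbfile: data=1 metadata=1 entry=0
decode_afr_xattr 0x000000000000000000000001   # dirty marker on /dir1: data=0 metadata=0 entry=1
```

In other words, /dir1 carries only an entry dirty marker, which is exactly the state that heal info used to misreport as split-brain before this fix.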