Bug 1319406
| Summary: | gluster volume heal info shows conservative merge entries as in split-brain | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Pranith Kumar K <pkarampu> |
| Component: | replicate | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | asrivast, olim, pkarampu, rhinduja, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| URL: | 1319406 | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-2 | Doc Type: | Bug Fix |
| Doc Text: | When directory operations failed with errors other than the brick being offline, the parent directory that contained the failed entries was shown as being in a split-brain state even when it was not. This has been corrected so that the state is shown correctly in this situation. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1322253 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-23 05:03:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1311817, 1322253, 1326212 | | |
Description (Pranith Kumar K, 2016-03-19 17:04:48 UTC)
QATP:
====
1) Have an AFR (replicate) volume.
2) Mount the volume and create some directories and files in them.
3) Now bring down one of the replica bricks.
4) Now, from the mount, change the permissions and ownership of some directories and their files.
5) Disable the self-heal daemon and all the client-side heal options (data, metadata, entry) to avoid client-side healing.
6) Now bring the brick back online.
7) Keep monitoring heal info and heal info split-brain in a loop until the test case is complete.
8) Now enable healing and start a heal of the volume.
9) Now check the healing: the heal must complete, and the new file permissions and ownership must be updated on the sink brick, which can be checked on the backend brick. Also, no split-brain errors must be seen and all heals must pass successfully.

(See the command sketch appended at the end of this report.)

Validation:
=========
Got the above case automated and ran it both manually and via automation; the case passed. Hence moving to verified.

```
[root@dhcp35-191 ~]# rpm -qa|grep gluste
glusterfs-cli-3.7.9-6.el7rhgs.x86_64
glusterfs-libs-3.7.9-6.el7rhgs.x86_64
glusterfs-fuse-3.7.9-6.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-6.el7rhgs.x86_64
glusterfs-server-3.7.9-6.el7rhgs.x86_64
python-gluster-3.7.9-5.el7rhgs.noarch
glusterfs-3.7.9-6.el7rhgs.x86_64
glusterfs-api-3.7.9-6.el7rhgs.x86_64
```

Suggested doc text:
"When directory operations failed with errors other than the brick being offline, the *parent directory containing these* entries that failed was shown as being in a split-brain state even when it was not. This has been corrected so that the state is shown correctly in this situation."
Added the correction in '*'.

Looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
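For convenience, here is a minimal shell sketch of the QATP above. It assumes a 1x2 replicate volume named testvol with bricks at server1:/bricks/testvol and server2:/bricks/testvol, a FUSE mount at /mnt/testvol, passwordless ssh to server2, and a local user testuser; all of these names are hypothetical placeholders, not details from the original report. The gluster commands themselves are standard CLI usage of the glusterfs 3.7 era.

```sh
#!/bin/sh
# Hedged reproduction sketch for the QATP. Volume, server, brick, mount,
# and user names are hypothetical placeholders.
VOL=testvol

# 1-2) Mount the volume and create some directories and files.
mount -t glusterfs server1:/$VOL /mnt/$VOL
mkdir -p /mnt/$VOL/dir1
touch /mnt/$VOL/dir1/file1 /mnt/$VOL/dir1/file2

# 3) Bring down one replica brick by killing its brick process on server2
#    (the glusterfsd command line contains the volume name).
gluster volume status $VOL      # confirm which brick goes offline
ssh server2 "pkill -9 -f 'glusterfsd.*$VOL'"

# 4) From the mount, change permissions and ownership while the brick is down.
chmod 700 /mnt/$VOL/dir1
chown testuser:testuser /mnt/$VOL/dir1/file1

# 5) Disable the self-heal daemon and all client-side heals.
gluster volume set $VOL cluster.self-heal-daemon off
gluster volume set $VOL cluster.data-self-heal off
gluster volume set $VOL cluster.metadata-self-heal off
gluster volume set $VOL cluster.entry-self-heal off

# 6) Bring the downed brick back online ("force" restarts dead brick processes).
gluster volume start $VOL force

# 7) Monitor heal state in a loop; before the fix, the parent directory of
#    the changed entries could be wrongly annotated as split-brain here.
watch -n 5 "gluster volume heal $VOL info; gluster volume heal $VOL info split-brain"

# 8) Re-enable healing and trigger a heal of the volume.
gluster volume set $VOL cluster.self-heal-daemon on
gluster volume set $VOL cluster.data-self-heal on
gluster volume set $VOL cluster.metadata-self-heal on
gluster volume set $VOL cluster.entry-self-heal on
gluster volume heal $VOL

# 9) Verify on the backend (sink) brick that permissions and ownership healed.
stat /bricks/$VOL/dir1 /bricks/$VOL/dir1/file1
```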
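For context on the symptom itself: in heal info output of this vintage, pending entries are listed per brick, and genuine split-brain entries carry an "Is in split-brain" annotation. Per the summary and doc text, a directory that merely needed a conservative merge (its child operations having failed with errors other than the brick being offline) was wrongly annotated; with the fix it is listed as an ordinary pending heal. Roughly like this, using the same hypothetical names as above (illustrative output only, not captured from a real run):

```
# Symptom before the fix: /dir1 needs only a conservative merge,
# yet is reported as split-brain.
[root@server1 ~]# gluster volume heal testvol info
Brick server1:/bricks/testvol
/dir1 - Is in split-brain
Number of entries: 1

Brick server2:/bricks/testvol
/dir1 - Is in split-brain
Number of entries: 1

# With the fix (glusterfs-3.7.9-2 and later), the same state is
# reported as an ordinary pending heal:
[root@server1 ~]# gluster volume heal testvol info
Brick server1:/bricks/testvol
/dir1
Number of entries: 1

Brick server2:/bricks/testvol
/dir1
Number of entries: 1
```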