Bug 1004755 - afr: self heal completed but the entries found on the indices directory of sync, change logs on entries are all zeros
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-05 12:12 UTC by Rahul Hinduja
Modified: 2016-09-17 12:12 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:13:49 UTC
Embargoed:



Description Rahul Hinduja 2013-09-05 12:12:29 UTC
Description of problem:
======================

The scenario is as follows:
While I/O was in progress from the clients, one brick from each replica pair was lazily unmounted (umount -l). Once the I/O completed on the clients, the bricks were remounted and glusterd was restarted, which triggered self-heal. After the heal completed, entries were still found in the indices directory of the SYNC brick, even though the changelogs on those entries are all zeros. Arequal matches between SOURCE and SYNC.

The entries are found only on the SYNC brick and are not removed. It is not clear why the entries are still present on the SYNC brick when their changelogs are all zeros.
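
For context: each name under .glusterfs/indices/xattrop is the GFID of a file that the index translator has flagged as possibly needing heal, and a GFID maps to the backend inode at .glusterfs/<first two hex characters>/<next two>/<gfid>, whose trusted.afr.* xattrs hold the pending changelog counters. A minimal sketch of that cross-check, reusing the brick path and GFID shown under "Actual Result" below:

# Brick path and GFID taken from the Actual Result output below.
BRICK=/rhs/brick1/b12
GFID=fe5188de-db14-4c1c-a84e-7d00de16ce5c

# Index entries still flagged on this brick
ls "$BRICK/.glusterfs/indices/xattrop/" | wc -l

# Dump the AFR changelog xattrs of the corresponding backend inode
getfattr -d -e hex -m . "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID}"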


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.4.0.30rhs-2.el6rhs.x86_64


Steps Carried:
==============

1. Create and start a 6x2 volume from 4 server nodes (a command-level sketch of these steps follows after this list).
2. Mount the volume on a 3.4.0 client (FUSE and NFS).
3. Mount the volume on a 3.3.0 client (FUSE and NFS).
4. Create directories and files from all the mount points.
5. While writes are in progress from the mounts, lazily unmount (umount -l) one brick in each replica pair.
6. Once the writes are complete, remount the bricks.
7. Since the brick processes did not start after remounting, restart glusterd on the servers where the bricks were unmounted.
8. The self-heal daemon starts self-healing.
9. Note that entries are present in the indices/xattrop directory of both the SOURCE and SYNC bricks.
10. Self-heal completes from the source brick. Arequal matches between the source and sync bricks. The entries are removed from the source brick but remain on the SYNC brick, and the changelogs of those entries are all zeros.
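
For reference, a command-level sketch of the procedure above, as I understand it. The volume name vol-dr and the /rhs/brickN paths are taken from the output in this report; hostnames, mount points, and the exact brick layout are placeholders.

# Hypothetical hosts server1..server4; 12 bricks paired as 6x2 (replica 2)
gluster volume create vol-dr replica 2 \
    server1:/rhs/brick1/b1  server2:/rhs/brick1/b2 \
    server3:/rhs/brick1/b3  server4:/rhs/brick1/b4 \
    server1:/rhs/brick2/b5  server2:/rhs/brick2/b6 \
    server3:/rhs/brick2/b7  server4:/rhs/brick2/b8 \
    server1:/rhs/brick3/b9  server2:/rhs/brick3/b10 \
    server3:/rhs/brick3/b11 server4:/rhs/brick3/b12
gluster volume start vol-dr

# FUSE and NFS mounts on the clients
mount -t glusterfs server1:/vol-dr /mnt/fuse
mount -t nfs -o vers=3 server1:/vol-dr /mnt/nfs

# While client I/O is running: lazily unmount one brick filesystem per replica pair
umount -l /rhs/brick1

# After client I/O completes: remount the brick filesystem, then restart glusterd
# (the brick process did not come back on its own after the remount)
mount /rhs/brick1
service glusterd restart

# Watch the self-heal progress
gluster volume heal vol-dr info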

Actual Result:
==============

[root@rhs-client14 ~]# ls /rhs/brick1/b12/.glusterfs/indices/xattrop/ | wc
    225     225    8333
[root@rhs-client14 ~]# ls /rhs/brick1/b12/.glusterfs/indices/xattrop/ | grep fe5188de-db14-4c1c-a84e-7d00de16ce5c
fe5188de-db14-4c1c-a84e-7d00de16ce5c
[root@rhs-client14 ~]# getfattr -d -e hex -m . /rhs/brick1/b12/.glusterfs/fe/51/fe5188de-db14-4c1c-a84e-7d00de16ce5c 
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b12/.glusterfs/fe/51/fe5188de-db14-4c1c-a84e-7d00de16ce5c
trusted.afr.vol-dr-client-10=0x000000000000000000000000
trusted.afr.vol-dr-client-11=0x000000000000000000000000
trusted.gfid=0xfe5188dedb144c1ca84e7d00de16ce5c

[root@rhs-client14 ~]# 
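
For reference, each trusted.afr.<client> value is 12 bytes: three big-endian 32-bit counters for pending data, metadata, and entry operations against that client/brick, so an all-zero value means AFR considers nothing pending there. A minimal sketch of decoding the value quoted above and cross-checking against the heal status (volume name vol-dr taken from the xattr names; assumes a 3.4-era gluster CLI):

# 24 hex digits from the output above (without the 0x prefix)
val=000000000000000000000000
echo "data pending:     $((16#${val:0:8}))"
echo "metadata pending: $((16#${val:8:8}))"
echo "entry pending:    $((16#${val:16:8}))"

# Cross-check with what gluster itself reports as pending heals
gluster volume heal vol-dr info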


Expected results:
=================
Entries whose changelogs are all zeros should not be present on the SYNC brick; they should have been removed from the indices directory once the heal completed.


Additional info:
================

A replace-brick operation was performed on a different set of bricks in the same volume, not on the bricks where the stale entries are present.

Comment 3 Pranith Kumar K 2015-03-18 08:30:59 UTC
Seems similar to http://review.gluster.com/9714

Comment 4 Vivek Agarwal 2015-12-03 17:13:49 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

