Bug 1283045 - Index entries are not purged when the corresponding file does not exist
Summary: Index entries are not purged when the corresponding file does not exist
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Ashish Pandey
QA Contact: Vijay Avuthu
URL:
Whiteboard: rebase
Depends On: 1270668
Blocks: 1283036 1503134
 
Reported: 2015-11-18 05:31 UTC by Ashish Pandey
Modified: 2018-09-17 14:21 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.12.2-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1270668
Environment:
Last Closed: 2018-09-04 06:26:58 UTC
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:29:05 UTC)

Description Ashish Pandey 2015-11-18 05:31:27 UTC
+++ This bug was initially created as a clone of Bug #1270668 +++

Description of problem:
If a file does not exist on a brick but an index entry for it is present in XXX/.glusterfs/indices/xattrop, the entry is not removed even after running an index heal.
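
For background, each index entry is named after a file's GFID, and the backend hard link for a GFID lives under .glusterfs/<aa>/<bb>/<gfid> inside the brick. A minimal sketch for spotting stale entries on a brick (the BRICK path is an illustrative placeholder, not taken from this report):

# Hedged sketch: flag index entries whose GFID has no backend file on the
# brick. BRICK is a placeholder; point it at a real brick path.
BRICK=/brick/gluster/R11
for idx in "$BRICK"/.glusterfs/indices/xattrop/*; do
    gfid=$(basename "$idx")
    case "$gfid" in xattrop-*) continue ;; esac  # skip the base xattrop link
    backend="$BRICK/.glusterfs/$(echo "$gfid" | cut -c1-2)/$(echo "$gfid" | cut -c3-4)/$gfid"
    [ -e "$backend" ] || echo "stale index entry: $gfid"
done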

Version-Release number of selected component (if applicable):

[root@aspandey:/home/blr/aspandey/glusterfs]# glusterfs --version
glusterfs 3.8dev built on Oct 12 2015 10:40:32
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@aspandey:/home/blr/aspandey/glusterfs]# 


How reproducible:
100%

Steps to Reproduce:
1. Create a replicate volume, mount it, and create some files on the mount point.
2. Add bricks and, before executing rebalance, kill one of the newly added bricks.
3. Execute rebalance.
4. Add bricks again and execute rebalance (some files written to the bricks added in [2] will be rebalanced to the bricks added in [4]).
5. Start the volume with force.
6. Start heal.
7. heal info shows nothing to be healed.
8. Look into XXX/.glusterfs/indices/xattrop on one of the bricks added in [2]. Index entries will still be present (see the CLI sketch below).
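
A rough CLI sketch of these steps; the volume name, host, and brick paths are illustrative assumptions, not taken from this report:

# Hedged reproduction sketch, assuming host 'server1' and placeholder brick paths.
gluster volume create repvol replica 2 server1:/bricks/r1 server1:/bricks/r2 force
gluster volume start repvol
mkdir -p /mnt/repvol && mount -t glusterfs server1:/repvol /mnt/repvol
for i in $(seq 1 100); do echo data > /mnt/repvol/file$i; done
gluster volume add-brick repvol server1:/bricks/r11 server1:/bricks/r12 force
# Kill one newly added brick before rebalancing (PID from 'gluster volume status').
gluster volume status repvol
kill -9 <pid-of-newly-added-brick>
gluster volume rebalance repvol start    # wait for completion via 'rebalance repvol status'
gluster volume add-brick repvol server1:/bricks/r111 server1:/bricks/r112 force
gluster volume rebalance repvol start
gluster volume start repvol force        # restarts the killed brick
gluster volume heal repvol
gluster volume heal repvol info          # reports 0 entries everywhere
ls /bricks/r11/.glusterfs/indices/xattrop/   # stale entries still listed here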


Actual results:
Indices are present for files which do not exist on the volume.

Expected results:

No indices should be present.

Additional info:


[root@aspandey:/home/blr/aspandey/glusterfs]# gluster v heal repvol info
Brick aspandey:/brick/gluster/R1
Number of entries: 0

Brick aspandey:/brick/gluster/R2
Number of entries: 0

Brick aspandey:/brick/gluster/R11
Number of entries: 0

Brick aspandey:/brick/gluster/R12
Number of entries: 0

Brick aspandey:/brick/gluster/R111
Number of entries: 0

Brick aspandey:/brick/gluster/R112
Number of entries: 0

[root@aspandey:/home/blr/aspandey/glusterfs]# ll /brick/gluster/R11/.glusterfs/indices/xattrop/
total 0
----------. 4 root root 0 Oct 12 10:48 26b306f3-7204-461b-9b61-3147651ee181
----------. 4 root root 0 Oct 12 10:48 2e37a37f-ce9f-4ccc-8c9c-fb4eefded44a
----------. 4 root root 0 Oct 12 10:48 2ea426f0-0e7a-43a3-ae1c-b9463be4062e
----------. 4 root root 0 Oct 12 10:48 xattrop-e5af794a-3622-47be-8b9d-65a9f211975d
[root@aspandey:/home/blr/aspandey/glusterfs]# gluster v info
 
Volume Name: repvol
Type: Distributed-Replicate
Volume ID: e33f7dc2-3a21-4b0f-9965-f039f6dbb003
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: aspandey:/brick/gluster/R1
Brick2: aspandey:/brick/gluster/R2
Brick3: aspandey:/brick/gluster/R11
Brick4: aspandey:/brick/gluster/R12
Brick5: aspandey:/brick/gluster/R111
Brick6: aspandey:/brick/gluster/R112
Options Reconfigured:
performance.readdir-ahead: on

--- Additional comment from Vijay Bellur on 2015-10-12 04:24:19 EDT ---

REVIEW: http://review.gluster.org/12336 (cluster/afr : Remove index entries if corresponding file does not exist.) posted (#1) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Vijay Bellur on 2015-10-16 04:28:12 EDT ---

REVIEW: http://review.gluster.org/12336 (cluster/afr : Remove index entries if corresponding file does not exist.) posted (#2) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Vijay Bellur on 2015-10-29 12:05:12 EDT ---

REVIEW: http://review.gluster.org/12336 (cluster/afr : Remove stale indices) posted (#3) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Vijay Bellur on 2015-11-16 03:20:12 EST ---

COMMIT: http://review.gluster.org/12336 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 92e3bbbad803688a4dbcbab6bcd35867aa055da1
Author: Ashish Pandey <aspandey>
Date:   Mon Oct 12 13:14:08 2015 +0530

    cluster/afr : Remove stale indices
    
    Change-Id: Iba23338a452b49dc9fe6ae7b4ca108ebc377fe42
    BUG: 1270668
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/12336
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Comment 2 Mike McCune 2016-03-28 22:16:42 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment 7 Vijay Avuthu 2018-04-04 15:30:58 UTC
Update:
===========

Verified with build : glusterfs-3.12.2-6.el7rhgs.x86_64

Steps followed:

1) Create a 2 x 3 volume and start it.
2) Create 1000 files from the client.
3) Add bricks (do NOT start rebalance).
4) Pick one of the bricks added in step 3 and kill it.
5) Start rebalance and wait until it completes.
6) Add bricks to the volume again.
7) Start rebalance and wait until it completes.
8) Start the volume with force and wait for self-heal to complete.
9) Check for index entries in xattrop on the bricks added in step 3.

Result: No index entries are present in xattrop on the bricks added in step 3 (see the outputs below; a quick check follows them).

[root@dhcp35-61 glusterfs]# ls /bricks/brick3/testvol_distributed-replicated_brick6/.glusterfs/indices/xattrop
xattrop-468f5bdb-69c2-4cc1-9756-df1022a26cdf
[root@dhcp35-61 glusterfs]# 

[root@dhcp35-174 yum.repos.d]# ls /bricks/brick3/testvol_distributed-replicated_brick7/.glusterfs/indices/xattrop
xattrop-7d7e1fe2-014d-4330-a70b-1d9a35eaa6e4
[root@dhcp35-174 yum.repos.d]# 


[root@dhcp35-17 yum.repos.d]# ls /bricks/brick3/testvol_distributed-replicated_brick8/.glusterfs/indices/xattrop
xattrop-d1d8067d-c871-455a-b211-f1d67925d6d8
[root@dhcp35-17 yum.repos.d]# 
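
As a quick hedged check of this state: only the single base xattrop-<uuid> link should remain in each index directory, so anything else in the listing would indicate leftover entries. For example, using the first brick path above:

# No output from this filter means no stale index entries on the brick.
ls /bricks/brick3/testvol_distributed-replicated_brick6/.glusterfs/indices/xattrop | grep -v '^xattrop-'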

> Changing status to VERIFIED.

Comment 9 errata-xmlrpc 2018-09-04 06:26:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

