Bug 1121186 - DHT: if directory deletion is in progress and a lookup from another mount heals that directory on the sub-volumes, then rmdir/rm -rf on the parent fails with 'Directory not empty'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Sakshi
QA Contact: krishnaram Karthick
URL:
Whiteboard: dht-rm-rf
Depends On: 1115367
Blocks: 1299184
 
Reported: 2014-07-18 15:08 UTC by Rachana Patel
Modified: 2016-08-01 01:22 UTC (History)
8 users

Fixed In Version: glusterfs-3.7.9-4
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-23 04:53:27 UTC




Links
System ID | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHBA-2016:1240 | normal | SHIPPED_LIVE | Red Hat Gluster Storage 3.1 Update 3 | 2016-06-23 08:51:28 UTC

Description Rachana Patel 2014-07-18 15:08:22 UTC
Description of problem:
=======================
When directory deletion is in progress (the directory has already been removed from all or some of the non-hashed sub-volumes but not yet from the hashed sub-volume) and a lookup arrives from another mount point, the lookup self-heal recreates the directory everywhere, including on the non-hashed sub-volumes it had already been deleted from. A subsequent attempt to delete the parent then fails with 'Directory not empty', yet listing the parent's contents on the mount does not show the child directory, because the child is missing from the hashed sub-volume, where it was not healed.



Version-Release number:
=========================
3.6.0.24-1.el6rhs.x86_64


How reproducible:
=================
always


Steps to Reproduce:
====================
1. Create and mount a distributed volume (mount it on multiple clients).
2. Create directories on the mount point:
[root@OVM1 gdb1]# mkdir -p 1/2/3

backend:-
[root@OVM3 ~]# tree  /var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick*/*/gdb1/ 
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick1/r1/gdb1/
└── 1
    └── 2
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick2/r2/gdb1/
└── 1
    └── 2
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick3/r3/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick4/r4/gdb1/
└── 1
    └── 2
        └── 3
3. From one mount point execute 'rm -rf *'; from the other mount point, send a lookup once the directory has been removed from the non-hashed sub-volumes but not yet from the hashed sub-volume.

-->mount1:-
[root@OVM1 gdb1]# rm -rf *

bricks:-
[root@OVM3 ~]# tree  /var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick*/*/gdb1/ 
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick1/r1/gdb1/
└── 1
    └── 2
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick2/r2/gdb1/
└── 1
    └── 2
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick3/r3/gdb1/
└── 1
    └── 2
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick4/r4/gdb1/
└── 1
    └── 2
        └── 3

---> send a lookup now
mount 2:-
[root@OVM1 gdb1]# ls -lR

[root@OVM3 ~]# tree  /var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick*/*/gdb1/ 
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick1/r1/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick2/r2/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick3/r3/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick4/r4/gdb1/
└── 1
    └── 2
        └── 3

12 directories, 0 files

----> rm will fail on first mount:-

[root@OVM1 gdb1]# rm -rf *
rm: cannot remove `1/2': Directory not empty

mount:-
[root@OVM1 gdb1]# ls -lR
.:
total 0
drwxr-xr-x 3 root root 70 Jul 18 14:21 1

./1:
total 0
drwxr-xr-x 2 root root 54 Jul 18 14:23 2

./1/2:
total 0
<----------------------------- directory "3" is not listed here, but it is present on all the non-hashed bricks

bricks:-
[root@OVM3 ~]# tree  /var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick*/*/gdb1/ 
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick1/r1/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick2/r2/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick3/r3/gdb1/
└── 1
    └── 2
        └── 3
/var/run/gluster/snaps/c8189009534a45aa85a0c66928498dca/brick4/r4/gdb1/
└── 1
    └── 2

11 directories, 0 files
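The inconsistent brick state shown in the trees above can be simulated without Gluster, using plain local directories, to show why the second `rm -rf` hits ENOTEMPTY: the hashed sub-volume (brick4 here) still contains the child directory, so removing `1/2` fails there even though the mount shows `1/2` as empty. This is only an illustrative sketch; the brick names and layout are taken from this report.

```shell
# Stand-in directories for the four sub-volumes; no Gluster required.
tmp=$(mktemp -d)
for b in brick1 brick2 brick3 brick4; do
    mkdir -p "$tmp/$b/1/2/3"
done

# The failed 'rm -rf' left "3" behind on brick4 only (the other three
# copies were recreated by the lookup self-heal in the real bug; here we
# just model the state after the second deletion attempt started).
rmdir "$tmp/brick1/1/2/3" "$tmp/brick2/1/2/3" "$tmp/brick3/1/2/3"

# Deleting "1/2" now succeeds on brick1-3 but fails with ENOTEMPTY on
# brick4 -- the error reported to the client as 'Directory not empty'.
fails=0
for b in brick1 brick2 brick3 brick4; do
    rmdir "$tmp/$b/1/2" 2>/dev/null || { fails=$((fails + 1)); last_fail=$b; }
done
echo "rmdir failed on $fails sub-volume(s), last: $last_fail"
# prints: rmdir failed on 1 sub-volume(s), last: brick4
```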






Actual results:
===============
- Directory deletion fails with 'Directory not empty', but the mount point does not show any directories or files inside the directory being removed.


Expected results:
=================
- Directory deletion should not fail with this error.
- No two directories should have the same gfid.

Comment 2 Rachana Patel 2014-07-21 08:44:46 UTC
Sorry, the expected result should be:

Expected results:
=================
Directory deletion should not fail with this error; if the directory is empty, it should be deleted from all sub-volumes.
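For context, the usual way to prevent this class of race is to serialize the deletion with the lookup self-heal, so a lookup cannot recreate entries while an rmdir is in flight (the fix delivered via bug 1115367 is believed to take locks inside DHT for this purpose; that detail is an assumption, not stated in this report). A minimal local sketch of the serialization idea, using `flock(1)` in place of DHT's internal locking:

```shell
tmp=$(mktemp -d)
lock="$tmp/dir.lock"

# "Deleter": takes the lock first and holds it while removing entries.
(
    flock -x 9
    sleep 1
    echo "delete finished" >> "$tmp/order"
) 9>"$lock" &

sleep 0.2   # give the deleter a head start on the lock

# "Lookup self-heal": blocks until the deletion completes, so it observes
# the final state instead of recreating half-deleted entries.
(
    flock -x 9
    echo "heal ran" >> "$tmp/order"
) 9>"$lock"

wait
first=$(head -n 1 "$tmp/order")
echo "$first"   # "delete finished" always precedes "heal ran"
rm -rf "$tmp"
```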

Comment 8 Raghavendra G 2016-04-26 04:08:56 UTC
Sakshi,

Shouldn't this bug "depend on" 1115367 (instead of blocking it)? Since 1115367 has moved to MODIFIED, can we move this bug to MODIFIED?

regards,
Raghavendra.

Comment 9 Nithya Balachandran 2016-04-26 11:35:28 UTC
(In reply to Raghavendra G from comment #8)
> Sakshi,
> 
> Shouldn't this bug "depend on" 1115367 (instead of blocking it)? Since
> 1115367 has moved to MODIFIED, can we move this bug to MODIFIED?
> 
> regards,
> Raghavendra.

Correct. Modified this BZ.

Comment 12 krishnaram Karthick 2016-05-12 11:01:04 UTC
Verified the fix in build glusterfs-3.7.9-4 on both NFS and FUSE mounts separately. The issue reported in this bug was not seen, i.e., no 'Directory not empty' errors occurred.

Tests that were run to validate the fix:

 - rm -rf + lookups from different mount points on the same directory with lots of sub-dirs
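A harness for that kind of test can be sketched as follows. For illustration it runs locally against a single directory tree; in the actual verification the `rm -rf` and the lookup loop would run from two different mount points of the same volume (the paths here are placeholders, not the ones used by QA):

```shell
top="$(mktemp -d)/dir"    # stands in for a directory on the volume
mkdir -p "$top"
for i in 1 2 3 4 5 6 7 8 9 10; do
    mkdir -p "$top/sub$i/a/b/c"
done

# Mount 1: recursive delete, run in the background.
rm -rf "$top" &
pid=$!

# Mount 2: issue lookups on the same tree while the delete runs.
while kill -0 "$pid" 2>/dev/null; do
    ls -lR "$top" >/dev/null 2>&1
done
wait "$pid"

# With the fix, the delete completes and nothing is left behind.
if [ -e "$top" ]; then echo "FAIL: directory survived"; else echo "PASS: fully removed"; fi
```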

Comment 14 errata-xmlrpc 2016-06-23 04:53:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

