Bug 1158156 - Quota: Cannot remove hard links with quota enabled, rm -rf errors out with "Directory not empty"
Summary: Quota: Cannot remove hard links with quota enabled, rm -rf errors out with "Directory not empty"
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Ben Turner
URL:
Whiteboard:
Depends On:
Blocks: 1286090
 
Reported: 2014-10-28 18:20 UTC by Ben Turner
Modified: 2015-11-27 10:43 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1286090
Environment:
Last Closed: 2015-11-27 10:42:38 UTC
Embargoed:



Description Ben Turner 2014-10-28 18:20:28 UTC
Description of problem:

When I run rm -rf /gluster-mount/*, deleting the hard links errors out with:

:: [  BEGIN   ] :: Data deleted. :: actually running 'rm -rf /quota-mount/*'
rm: cannot remove `/quota-mount/hardlinks-destination': Directory not empty
rm: cannot remove `/quota-mount/hardlinks-sources': Directory not empty
:: [   FAIL   ] :: Data deleted. (Expected 0, got 1)

Version-Release number of selected component (if applicable):

Servers - glusterfs-3.4.0.69rhs-1.el6rhs.x86_64
Client - glusterfs-3.6.0.29-2.el6.x86_64

How reproducible:

Every time I have run it.

Steps to Reproduce:
1.  Mount a volume.
2.  Create a hard link from one directory to another.
3.  Run rm -rf /gluster-mount/* (see the command sketch below).
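
A minimal command sketch of that setup, assuming a volume named testvol mounted at /gluster-mount with quota enabled (the server name and quota limit here are illustrative, not taken from the original report):

# Enable quota on the volume and set some limit (values illustrative)
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 10GB

# Mount the volume and create a hard link across directories
mount -t glusterfs server1:/testvol /gluster-mount
mkdir /gluster-mount/src /gluster-mount/dest
echo "Hello" > /gluster-mount/src/test
ln /gluster-mount/src/test /gluster-mount/dest/rawr

# Attempt the recursive delete; on this setup it fails as described below
rm -rf /gluster-mount/*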

Actual results:

The rm -rf command errors out with "Directory not empty".

Expected results:

The rm -rf is successful and everything is removed.

Additional info:

Comment 1 Ben Turner 2014-10-28 18:23:20 UTC
Simple repro:

[root@gqac031 ~]# cd /gluster-mount/
[root@gqac031 gluster-mount]# mkdir src
[root@gqac031 gluster-mount]# mkdir dest
[root@gqac031 gluster-mount]# echo "Hello" > /gluster-mount/src/test
[root@gqac031 gluster-mount]# ln /gluster-mount/src/test /gluster-mount/dest/rawr
[root@gqac031 gluster-mount]# rm -rf ./*
[root@gqac031 gluster-mount]# rm -rf *
rm: cannot remove `dest': Directory not empty
rm: cannot remove `src': Directory not empty
[root@gqac031 gluster-mount]# rm dest/rawr 
rm: remove regular file `dest/rawr'? y
[root@gqac031 gluster-mount]# rm src/test 
rm: remove regular file `src/test'? y
[root@gqac031 gluster-mount]# rm -rf ./*
rm: cannot remove `./dest': Directory not empty
rm: cannot remove `./src': Directory not empty
[root@gqac031 gluster-mount]# cd src
[root@gqac031 src]# ll
-rw-r--r-- 2 root root 6 Oct 28 14:18 test
[root@gqac031 src]# rm test 
rm: remove regular file `test'? y
[root@gqac031 src]# ll
total 1
-rw-r--r-- 2 root root 6 Oct 28 14:18 test

So it looks like the file I am hard linking is not getting deleted.
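
A quick way to see this from the mount point (paths match the simple repro above; stat is only used to show the remaining hard link count):

# Both names should report a link count of 2 right after the ln
stat -c 'links=%h  %n' /gluster-mount/src/test /gluster-mount/dest/rawr

# After removing one name the survivor should drop to links=1;
# in this setup the file is still listed even after both names are removed
rm -f /gluster-mount/dest/rawr
stat -c 'links=%h  %n' /gluster-mount/src/test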

Comment 2 Ben Turner 2014-10-28 18:30:20 UTC
From the client logs:

[2014-10-28 18:22:22.628057] W [fuse-bridge.c:1298:fuse_unlink_cbk] 0-glusterfs-fuse: 200: UNLINK() /src/test => -1 (Success)
[2014-10-28 18:24:50.273132] W [fuse-bridge.c:1298:fuse_unlink_cbk] 0-glusterfs-fuse: 217: UNLINK() /src/test => -1 (Success)
[2014-10-28 18:24:57.575455] W [fuse-bridge.c:1298:fuse_unlink_cbk] 0-glusterfs-fuse: 221: UNLINK() /src/test => -1 (Success)
[2014-10-28 18:25:16.482484] W [fuse-bridge.c:1298:fuse_unlink_cbk] 0-glusterfs-fuse: 226: UNLINK() /src/test => -1 (Success)
[2014-10-28 18:25:40.634906] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-testvol-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source testvol-client-0 to testvol-client-1,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /
[2014-10-28 18:25:42.211802] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-testvol-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source testvol-client-0 to testvol-client-1,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /
[2014-10-28 18:25:42.218508] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-testvol-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source testvol-client-0 to testvol-client-1,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /
[2014-10-28 18:25:42.222353] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-testvol-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source testvol-client-0 to testvol-client-1,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /dest
[2014-10-28 18:25:51.771628] W [fuse-bridge.c:1298:fuse_unlink_cbk] 0-glusterfs-fuse: 254: UNLINK() /dest/rawr => -1 (Success)
[2014-10-28 18:29:28.621680] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-testvol-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source testvol-client-0 to testvol-client-1,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /

Is the file only getting removed from one subvolume and getting self healed?
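
One way to check that directly on the servers, assuming the brick path seen in the server logs below (/bricks/testvol_brick0); run it on each replica after the failed rm:

# Is the backend file still present on this brick?
ls -li /bricks/testvol_brick0/src/test

# Dump its xattrs (including trusted.gfid); for regular files the gfid
# also has a hard link under the brick's .glusterfs directory
getfattr -d -m . -e hex /bricks/testvol_brick0/src/test

If the file survives on only one replica, a later lookup/self-heal could bring it back on the other, which would fit the repeated UNLINK messages above.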

Comment 3 Ben Turner 2014-10-28 18:40:09 UTC
And on the server:

[2014-10-28 18:18:20.838749] E [posix.c:199:posix_lookup] 0-testvol-posix: buf->ia_gfid is null for /bricks/testvol_brick0/..
[2014-10-28 18:18:20.838879] E [marker-quota.c:1819:mq_fetch_child_size_and_contri] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/features/changelog.so(changelog_setxattr_cbk+0xe3) [0x7fbd68b1c953] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xb9) [0x7fbd686f71a9] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/performance/io-threads.so(iot_setxattr_cbk+0xb9) [0x7fbd682c9519]))) 0-: Assertion failed: !"uuid null"
[2014-10-28 18:18:20.838939] E [posix.c:199:posix_lookup] 0-testvol-posix: buf->ia_gfid is null for /bricks/testvol_brick0/..
[2014-10-28 18:18:20.838960] W [marker-quota.c:1641:mq_update_inode_contribution] 0-testvol-marker: failed to get size and contribution of path (/..)(No data available)
[2014-10-28 18:18:20.839018] W [marker-quota.c:1405:mq_release_parent_lock] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/performance/io-threads.so(iot_lookup_cbk+0xd9) [0x7fbd682cb699] (-->/usr/lib64/libglusterfs.so.0(default_lookup_cbk+0xd9) [0x7fbd6fb2f369] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/features/marker.so(mq_update_inode_contribution+0x447) [0x7fbd63df3b47]))) 0-testvol-marker: An operation during quota updation of path (/..) failed (No data available)
[2014-10-28 18:18:20.839826] W [inode.c:911:inode_lookup] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/debug/io-stats.so(io_stats_readdirp_cbk+0x156) [0x7fbd639b9426] (-->/usr/lib64/glusterfs/3.4.0.69rhs/xlator/protocol/server.so(server_readdirp_cbk+0xc3) [0x7fbd6378f313] (-->/usr/lib64/libglusterfs.so.0(gf_link_inodes_from_dirent+0x4a) [0x7fbd6fb4644a]))) 0-testvol-server: inode not found
[2014-10-28 18:18:23.728585] E [posix.c:132:posix_lookup] 0-testvol-posix: null gfid for path /./.glusterfs
[2014-10-28 18:18:23.728589] E [posix.c:132:posix_lookup] 0-testvol-posix: null gfid for path /./src
[2014-10-28 18:18:23.728592] E [posix.c:132:posix_lookup] 0-testvol-posix: null gfid for path /./dest
[2014-10-28 18:18:23.728627] E [posix.c:149:posix_lookup] 0-testvol-posix: lstat on (null) failed: Invalid argument
[2014-10-28 18:18:23.728657] E [posix.c:149:posix_lookup] 0-testvol-posix: lstat on (null) failed: Invalid argument
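
The marker-quota messages above are failures while updating the quota size/contribution xattrs on the parent directories. A way to look at those xattrs directly on a brick (the trusted.glusterfs.quota.* key names are what the marker translator uses; exact names may differ between versions):

getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/testvol_brick0/src
getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/testvol_brick0/dest
# Missing or stale size/contri keys on these directories would line up with
# the mq_update_inode_contribution failures logged for (/..)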

Comment 5 Susant Kumar Palai 2015-11-27 10:42:38 UTC
Cloning to 3.1. To be fixed in a future release.

