Bug 794652 - rm -rf on a directory failed while more bricks were added to the system
Summary: rm -rf on a directory failed while more bricks were added to the system
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-02-17 08:16 UTC by M S Vishwanath Bhat
Modified: 2016-06-01 01:55 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-02-22 12:48:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
client log file (383.26 KB, text/x-log)
2012-02-17 13:30 UTC, M S Vishwanath Bhat

Description M S Vishwanath Bhat 2012-02-17 08:16:44 UTC
Description of problem:
Created a 2-node replicate volume and created a directory tree on it. Ran rm -rf on the directory and, while it was still running, added two more bricks to the volume, converting it into a distribute-replicate volume. The rm -rf failed.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0qa22

How reproducible:
thrice in as many tries

Steps to Reproduce:
1. Create and start a 2-node pure replicate volume.
2. Create a directory tree on the mount point.
3. Run rm -rf on the directory.
4. While the rm -rf is still running, add two more bricks to the volume (see the sketch below).
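
A rough reproduction sketch follows. The volume name and brick paths mirror the attached client log; the node1..node4 hostnames, mount point, and the shape of the directory tree are assumptions for illustration, not taken from this report.

# Assumed hosts node1..node4; volume name and brick paths follow the client log.
gluster volume create hosdu replica 2 node1:/data/bricks/hosdu_brick1 node2:/data/bricks/hosdu_brick2
gluster volume start hosdu
mount -t glusterfs node1:/hosdu /mnt

# Build a directory tree on the mount (arbitrary shape).
cd /mnt
mkdir -p a/{1..20}/{1..20}
for d in a/*/*; do touch "$d"/file{1..10}; done

# Start the recursive delete, then add two more bricks while it runs,
# turning the 2-brick replicate volume into a distribute-replicate volume.
rm -rf /mnt/a &
gluster volume add-brick hosdu node3:/data/bricks/hosdu_brick3 node4:/data/bricks/hosdu_brick4
wait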
  
Actual results:
rm -rf failed with an fts_read error message:

[root@RHEL6 mnt]# time rm -rf a
rm: fts_read failed: No such file or directory

real    2m22.360s
user    0m0.005s
sys     0m0.024s


Expected results:
rm -rf should succeed.

Additional info:

Entries in the client log:


[2012-02-17 01:33:16.057531] I [client-handshake.c:1102:select_server_supported_programs] 1-hosdu-client-0: Using Program GlusterFS 3.3.0qa22, Num (1298437), Version (330)
[2012-02-17 01:33:16.057872] I [client-handshake.c:923:client_setvolume_cbk] 1-hosdu-client-0: Connected to 10.1.11.113:24009, attached to remote volume '/data/bricks/hosdu_brick1'.
[2012-02-17 01:33:16.057895] I [afr-common.c:3461:afr_notify] 1-hosdu-replicate-0: subvol 0 came up, start crawl
[2012-02-17 01:33:16.057905] I [afr-common.c:3556:afr_notify] 1-hosdu-replicate-0: All subvolumes came up, start crawl
[2012-02-17 01:33:16.058266] I [client-handshake.c:923:client_setvolume_cbk] 1-hosdu-client-2: Connected to 10.1.11.136:24009, attached to remote volume '/data/bricks/hosdu_brick3'.
[2012-02-17 01:33:16.058288] I [afr-common.c:3457:afr_notify] 1-hosdu-replicate-1: Subvolume 'hosdu-client-2' came back up; going online.
[2012-02-17 01:33:16.063598] I [fuse-bridge.c:3699:fuse_graph_setup] 0-fuse: switched to graph 1
[2012-02-17 01:33:16.137570] I [afr-common.c:1827:afr_set_root_inode_on_first_lookup] 1-hosdu-replicate-0: added root inode
[2012-02-17 01:33:16.182019] I [afr-common.c:1827:afr_set_root_inode_on_first_lookup] 1-hosdu-replicate-1: added root inode
[2012-02-17 01:33:16.272465] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: c9ebb69a-84da-47da-8411-601031535209: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.272509] E [fuse-bridge.c:519:fuse_getattr_resume] 0-glusterfs-fuse: 53145: GETATTR 140579743018596 (c9ebb69a-84da-47da-8411-601031535209) resolution failed
[2012-02-17 01:33:16.332797] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: c9ebb69a-84da-47da-8411-601031535209: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.332842] E [fuse-bridge.c:519:fuse_getattr_resume] 0-glusterfs-fuse: 53146: GETATTR 140579743018596 (c9ebb69a-84da-47da-8411-601031535209) resolution failed
[2012-02-17 01:33:16.403585] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: 9b8da545-c674-4848-b8ba-493a8d271543: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.403633] E [fuse-bridge.c:1358:fuse_rmdir_resume] 0-glusterfs-fuse: RMDIR 140579743018448 (00000000-0000-0000-0000-000000000000/a) resolution failed
[2012-02-17 01:33:16.465284] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: fd08b15a-281c-409a-9a23-ab59edd855dd: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.465330] E [fuse-bridge.c:1358:fuse_rmdir_resume] 0-glusterfs-fuse: RMDIR 140579743018300 (00000000-0000-0000-0000-000000000000/a) resolution failed
[2012-02-17 01:33:16.526687] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: 49fe2993-a5a9-4bcf-9273-1f9de8e3369d: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.526732] E [fuse-bridge.c:519:fuse_getattr_resume] 0-glusterfs-fuse: 54225: GETATTR 140579743018152 (49fe2993-a5a9-4bcf-9273-1f9de8e3369d) resolution failed
[2012-02-17 01:33:16.587728] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: 1d25c2a1-96f4-406a-b957-e91044c3cb5a: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.587798] E [fuse-bridge.c:519:fuse_getattr_resume] 0-glusterfs-fuse: 54227: GETATTR 140579743018004 (1d25c2a1-96f4-406a-b957-e91044c3cb5a) resolution failed
[2012-02-17 01:33:16.653459] W [fuse-resolve.c:148:fuse_resolve_gfid_cbk] 0-fuse: 1d25c2a1-96f4-406a-b957-e91044c3cb5a: failed to resolve (Invalid argument)
[2012-02-17 01:33:16.653507] E [fuse-bridge.c:519:fuse_getattr_resume] 0-glusterfs-fuse: 54228: GETATTR 140579743018004 (1d25c2a1-96f4-406a-b957-e91044c3cb5a) resolution failed
[2012-02-17 01:33:19.060785] I [client-handshake.c:1102:select_server_supported_programs] 1-hosdu-client-3: Using Program GlusterFS 3.3.0qa22, Num (1298437), Version (330)
[2012-02-17 01:33:19.063650] I [client-handshake.c:923:client_setvolume_cbk] 1-hosdu-client-3: Connected to 10.1.11.137:24009, attached to remote volume '/data/bricks/hosdu_brick4'.
[2012-02-17 01:33:19.063679] I [afr-common.c:3461:afr_notify] 1-hosdu-replicate-1: subvol 1 came up, start crawl

I was able to hit this issue three times in as many tries.

Comment 1 M S Vishwanath Bhat 2012-02-17 13:30:14 UTC
Created attachment 563903 [details]
client log file

This is happening consistently. Attaching the client log file.

Comment 2 Amar Tumballi 2012-02-22 04:10:42 UTC
Please check the behavior with 3.3.0qa23.

Comment 3 M S Vishwanath Bhat 2012-02-22 12:48:16 UTC
I don't see the issue in the glusterfs-3.3.0qa23 release, so I am marking the bug closed upstream.

