Bug 764899 (GLUSTER-3167) - after rebalance, stale nfs filehandle issue is seen
Summary: after rebalance, stale nfs filehandle issue is seen
Keywords:
Status: CLOSED WORKSFORME
Alias: GLUSTER-3167
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: shishir gowda
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-13 14:04 UTC by Saurabh
Modified: 2013-12-09 01:26 UTC (History)
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Saurabh 2011-07-13 14:04:57 UTC
I have a distribute-replicate volume with 3-way replication.

After adding bricks and running a rebalance, I am hitting "Stale NFS file handle" errors.

The logs below show the issue:


root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume add-brick
Usage: volume add-brick <VOLNAME> <NEW-BRICK> ...
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume add-brick drep31 10.1.12.25:/mnt/sdb2/add-drep31 10.1.12.25:/mnt/sdb2/add-d-drep31 10.1.12.25:/mnt/sdb2/add-dd-drep31
Add Brick successful
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance
Usage: volume rebalance <VOLNAME> [fix-layout|migrate-data] {start|stop|status}
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 start
starting rebalance on volume drep31 has been successful
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance step 2: data migration in progress: rebalanced 15 files of size 7500 (total files scanned 21)
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance completed: rebalanced 60 files of size 1045836488 (total files scanned 107)
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume rebalance drep31 status
rebalance completed: rebalanced 60 files of size 1045836488 (total files scanned 107)
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# ls
ls: cannot access 1GBfile: Stale NFS file handle
ls: cannot access f.8: Stale NFS file handle
ls: cannot access f.5: Stale NFS file handle
1GBfile  dir1   dir2  dir4  dir6  dir8  f.0  f.10  f.12  f.16  f.18  f.2   f.3  f.5  f.7  f.9
dir      dir10  dir3  dir5  dir7  dir9  f.1  f.11  f.13  f.17  f.19  f.20  f.4  f.6  f.8
root@Unbuntu:/mnt/swift/drep31/cont1# ls -li f.5
ls: cannot access f.5: No such file or directory
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# 
root@Unbuntu:/mnt/swift/drep31/cont1# ls -li f.5
ls: cannot access f.5: No such file or directory
root@Unbuntu:/mnt/swift/drep31/cont1# ls
ls: cannot access 1GBfile: Stale NFS file handle
ls: cannot access f.8: Stale NFS file handle
ls: cannot access f.5: Stale NFS file handle
1GBfile  dir1   dir2  dir4  dir6  dir8  f.0  f.10  f.12  f.16  f.18  f.2   f.3  f.5  f.7  f.9
dir      dir10  dir3  dir5  dir7  dir9  f.1  f.11  f.13  f.17  f.19  f.20  f.4  f.6  f.8
root@Unbuntu:/mnt/swift/drep31/cont1# ls
ls: cannot access 1GBfile: Stale NFS file handle
ls: cannot access f.8: Stale NFS file handle
ls: cannot access f.5: Stale NFS file handle
1GBfile  dir1   dir2  dir4  dir6  dir8  f.0  f.10  f.12  f.16  f.18  f.2   f.3  f.5  f.7  f.9
dir      dir10  dir3  dir5  dir7  dir9  f.1  f.11  f.13  f.17  f.19  f.20  f.4  f.6  f.8
root@Unbuntu:/mnt/swift/drep31/cont1# mount | grep drep31
localhost:drep31 on /mnt/swift/drep31 type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
root@Unbuntu:/mnt/swift/drep31/cont1# /root/glusterfs/inst/sbin/gluster volume info drep31

Volume Name: drep31
Type: Distribute
Status: Started
Number of Bricks: 9
Transport-type: tcp
Bricks:
Brick1: 10.1.12.25:/mnt/sdb2/drep31
Brick2: 10.1.12.25:/mnt/sdb2/d-drep31
Brick3: 10.1.12.25:/mnt/sdb2/dd-drep31
Brick4: 10.1.12.26:/mnt/sdb2/drep31
Brick5: 10.1.12.26:/mnt/sdb2/d-drep31
Brick6: 10.1.12.26:/mnt/dd-drep31
Brick7: 10.1.12.25:/mnt/sdb2/add-drep31
Brick8: 10.1.12.25:/mnt/sdb2/add-d-drep31
Brick9: 10.1.12.25:/mnt/sdb2/add-dd-drep31
Options Reconfigured:
features.quota: off
root@Unbuntu:/mnt/swift/drep31/cont1# 


Note: the data was created using Swift (object storage).
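The "Stale NFS file handle" messages in the `ls` output come from stat() calls failing with ESTALE (errno 116 on Linux) for entries that readdir() still lists, so the names appear in the directory listing even though they cannot be accessed. A minimal sketch of detecting such entries (the helper names and the `fake_stat` stand-in for a real mount are hypothetical, for illustration only):

```python
import errno
import os

def find_stale(paths, stat=os.stat):
    """Return the subset of paths whose stat() fails with ESTALE."""
    stale = []
    for p in paths:
        try:
            stat(p)
        except OSError as e:
            if e.errno == errno.ESTALE:
                stale.append(p)
    return stale

def fake_stat(path):
    # Simulated stat(): the three files from the transcript raise ESTALE,
    # mimicking what `ls` hit on the mount after the rebalance.
    if path in ("1GBfile", "f.5", "f.8"):
        raise OSError(errno.ESTALE, os.strerror(errno.ESTALE), path)

print(find_stale(["1GBfile", "f.0", "f.5", "f.8"], stat=fake_stat))
```

Running this against a real mount point (with the default `os.stat`) instead of `fake_stat` would report which directory entries are currently stale.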

Comment 1 shishir gowda 2011-07-15 05:14:15 UTC
I am not able to reproduce the issue on the latest 3.2.2 branch or on master.

If you run into the issue again, please reopen the bug and save the state info.

