Bug 1243369
Summary: | I/O loses file handle as brick graph is refreshed by enabling quota | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Amit Chaurasia <achauras>
Component: | quota | Assignee: | Manikandan <mselvaga>
Status: | CLOSED CURRENTRELEASE | QA Contact: | storage-qa-internal <storage-qa-internal>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.1 | CC: | mzywusko, rcyriac, rhinduja, rhs-bugs, sankarshan, smohan, storage-qa-internal, vmallika
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-04-14 08:17:56 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Amit Chaurasia
2015-07-15 10:15:41 UTC
[root@dht-rhs-20 ~]# gluster v info

Volume Name: testvol
Type: Distribute
Volume ID: a97abb32-c23b-43c7-9f11-31dd466b8954
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.98:/bricks/brick0/testvol
Brick2: 10.70.47.99:/bricks/brick0/testvol
Brick3: 10.70.47.98:/bricks/brick1/testvol
Brick4: 10.70.47.99:/bricks/brick1/testvol
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@dht-rhs-20 ~]#

dht-rhs-19 : 10.70.47.98
dht-rhs-20 : 10.70.47.99

The volume is mounted on the clients via both FUSE and NFS; the clients also run older-version (3.5.3) Gluster packages.

On one of the clients:

[amit@amit-lappy test_dir]$ rpm -qa | grep -i gluster
glusterfs-libs-3.5.3-1.fc21.x86_64
glusterfs-fuse-3.5.3-1.fc21.x86_64
glusterfs-3.5.3-1.fc21.x86_64
glusterfs-api-3.5.3-1.fc21.x86_64
[amit@amit-lappy test_dir]$

On the server:

[root@dht-rhs-20 test_dir]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-9.el6rhs.x86_64
glusterfs-server-3.7.1-9.el6rhs.x86_64
glusterfs-3.7.1-9.el6rhs.x86_64
glusterfs-fuse-3.7.1-9.el6rhs.x86_64
glusterfs-cli-3.7.1-9.el6rhs.x86_64
glusterfs-libs-3.7.1-9.el6rhs.x86_64
glusterfs-api-3.7.1-9.el6rhs.x86_64
[root@dht-rhs-20 test_dir]#

[root@dht-rhs-20 test_dir]# mount
/dev/vda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/VG01-LV00 on /bricks/brick0 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV01 on /bricks/brick1 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV02 on /bricks/brick2 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV03 on /bricks/brick3 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV04 on /bricks/brick4 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV05 on /bricks/brick5 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV06 on /bricks/brick6 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV07 on /bricks/brick7 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV08 on /bricks/brick8 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV09 on /bricks/brick9 type xfs (rw,noatime,nodiratime,inode64)
localhost:testvol on /var/run/gluster/testvol type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.70.47.98:/testvol on /mnt/gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@dht-rhs-20 test_dir]#

======

[root@dht-rhs-19 test_dir]# mount
/dev/vda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/VG01-LV00 on /bricks/brick0 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV01 on /bricks/brick1 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV02 on /bricks/brick2 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV03 on /bricks/brick3 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV04 on /bricks/brick4 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV05 on /bricks/brick5 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV06 on /bricks/brick6 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV07 on /bricks/brick7 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV08 on /bricks/brick8 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/VG01-LV09 on /bricks/brick9 type xfs (rw,noatime,nodiratime,inode64)
localhost:testvol on /var/run/gluster/testvol type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.70.47.99:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
10.70.47.99:/testvol on /mnt/nfs type nfs (rw,addr=10.70.47.99)
[root@dht-rhs-19 test_dir]#

=============

[root@dht-rhs-20 ~]# gluster v status all
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.98:/bricks/brick0/testvol    49650     0          Y       2791
Brick 10.70.47.99:/bricks/brick0/testvol    49277     0          Y       2884
Brick 10.70.47.98:/bricks/brick1/testvol    49651     0          Y       2871
Brick 10.70.47.99:/bricks/brick1/testvol    49278     0          Y       2962
NFS Server on localhost                     2049      0          Y       2983
Quota Daemon on localhost                   N/A       N/A        Y       13997
NFS Server on 10.70.47.98                   2049      0          Y       2891
Quota Daemon on 10.70.47.98                 N/A       N/A        Y       17817

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

[root@dht-rhs-20 ~]#

Sosreports are available at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1243369/

=============

Hi,

We tested the same in 3.7.11. A number of the changes we have made in the rename code path fix this issue as well.

--
Thanks,
Manikandan.
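=============

For context, a minimal reproduction sketch of the scenario in the summary: enabling quota regenerates the client volfile and forces a graph switch on the mount while a file is still open. The mount point (/mnt/gluster) and volume name (testvol) are taken from the transcripts above; the file name, dd parameters, and quota limit below are illustrative assumptions, not the exact commands used by QE.

# On the client: start a long-running write that holds one open fd.
# (File name and size are arbitrary, chosen for illustration.)
dd if=/dev/zero of=/mnt/gluster/fd_test bs=1M count=10000 &

# On a server node: enable quota mid-I/O. This regenerates the client
# volfile and triggers a graph switch on the FUSE mount.
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 50GB

# Back on the client: on an affected build, if the open fd is not
# migrated to the new graph, dd fails mid-stream (e.g. with
# "Bad file descriptor") instead of running to completion.
wait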