Bug 977483 - quota: test with more than one volume fails
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: high   Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: vpshastry
QA Contact: Saurabh
Keywords: TestBlocker
Depends On:
Blocks:
Reported: 2013-06-24 12:32 EDT by Saurabh
Modified: 2016-01-19 01:12 EST
CC: 8 users

See Also:
Fixed In Version: glusterfs-3.4.0.12rhs.beta6-1
Doc Type: Bug Fix
Doc Text:
Cause: The same root inode was shared by two threads. Because the root inode is modified during the call (for example, DHT sets layouts on it), sharing it between threads corrupts its state. Consequence: The problem could have been hit even in a single-volume setup, but there it was hidden by thread scheduling. Fix: Use a single thread. Result:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-23 18:29:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-06-24 12:32:39 EDT
Description of problem:

File creation stops on one volume, whereas on the other volume it keeps going.

Also, the gluster volume quota list output and du -sh do not match, on both volumes.

Also, the status command shows the NFS server as down, but it is actually running.

The following test case fails:
https://tcms.engineering.redhat.com/case/275950/?from_plan=9473

Output of gluster volume info:
[root@quota1 ~]# gluster v i
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 16ce5fbe-f72a-41c4-b3da-282d78acc1c0
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.98:/rhs/bricks/d1r1
Brick2: 10.70.37.174:/rhs/bricks/d1r2
Brick3: 10.70.37.136:/rhs/bricks/d2r1
Brick4: 10.70.37.168:/rhs/bricks/d2r2
Brick5: 10.70.37.98:/rhs/bricks/d3r1
Brick6: 10.70.37.174:/rhs/bricks/d3r2
Brick7: 10.70.37.136:/rhs/bricks/d4r1
Brick8: 10.70.37.168:/rhs/bricks/d4r2
Brick9: 10.70.37.98:/rhs/bricks/d5r1
Brick10: 10.70.37.174:/rhs/bricks/d5r2
Brick11: 10.70.37.136:/rhs/bricks/d6r1
Brick12: 10.70.37.168:/rhs/bricks/d6r2
Options Reconfigured:
features.limit-usage: /:5GB
features.quota: on
 
Volume Name: dist-rep2
Type: Distributed-Replicate
Volume ID: e9c5cbec-edea-41f9-a446-d92d6d5031b2
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.98:/rhs/bricks/d1r12
Brick2: 10.70.37.174:/rhs/bricks/d1r22
Brick3: 10.70.37.136:/rhs/bricks/d2r12
Brick4: 10.70.37.168:/rhs/bricks/d2r22
Brick5: 10.70.37.98:/rhs/bricks/d3r12
Brick6: 10.70.37.174:/rhs/bricks/d3r22
Brick7: 10.70.37.136:/rhs/bricks/d4r12
Brick8: 10.70.37.168:/rhs/bricks/d4r22
Brick9: 10.70.37.98:/rhs/bricks/d5r12
Brick10: 10.70.37.174:/rhs/bricks/d5r22
Brick11: 10.70.37.136:/rhs/bricks/d6r12
Brick12: 10.70.37.168:/rhs/bricks/d6r22
Options Reconfigured:
features.limit-usage: /:5GB
features.quota: on
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume status
Status of volume: dist-rep
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.98:/rhs/bricks/d1r1			49152	Y	10455
Brick 10.70.37.174:/rhs/bricks/d1r2			49152	Y	9828
Brick 10.70.37.136:/rhs/bricks/d2r1			49152	Y	9772
Brick 10.70.37.168:/rhs/bricks/d2r2			49152	Y	9854
Brick 10.70.37.98:/rhs/bricks/d3r1			49153	Y	10464
Brick 10.70.37.174:/rhs/bricks/d3r2			49153	Y	9837
Brick 10.70.37.136:/rhs/bricks/d4r1			49153	Y	9781
Brick 10.70.37.168:/rhs/bricks/d4r2			49153	Y	9863
Brick 10.70.37.98:/rhs/bricks/d5r1			49154	Y	10473
Brick 10.70.37.174:/rhs/bricks/d5r2			49154	Y	9846
Brick 10.70.37.136:/rhs/bricks/d6r1			49154	Y	9790
Brick 10.70.37.168:/rhs/bricks/d6r2			49154	Y	9872
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	14395
NFS Server on 54fb720d-b816-4e2a-833c-7fbffc5a5363	2049	Y	13217
Self-heal Daemon on 54fb720d-b816-4e2a-833c-7fbffc5a5363	N/A	Y	13228
NFS Server on 8ba345c1-0723-4c6c-a380-35f2d4c706c7	2049	Y	13115
Self-heal Daemon on 8ba345c1-0723-4c6c-a380-35f2d4c706c7	N/A	Y	13123
NFS Server on 6f39ec04-96fb-4aa7-b31a-29f11d34dac1	2049	Y	13162
Self-heal Daemon on 6f39ec04-96fb-4aa7-b31a-29f11d34dac1	N/A	Y	13171
 
There are no active volume tasks
Status of volume: dist-rep2
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.98:/rhs/bricks/d1r12			49155	Y	14360
Brick 10.70.37.174:/rhs/bricks/d1r22			49155	Y	13134
Brick 10.70.37.136:/rhs/bricks/d2r12			49155	Y	13087
Brick 10.70.37.168:/rhs/bricks/d2r22			49155	Y	13189
Brick 10.70.37.98:/rhs/bricks/d3r12			49156	Y	14369
Brick 10.70.37.174:/rhs/bricks/d3r22			49156	Y	13143
Brick 10.70.37.136:/rhs/bricks/d4r12			49156	Y	13096
Brick 10.70.37.168:/rhs/bricks/d4r22			49156	Y	13198
Brick 10.70.37.98:/rhs/bricks/d5r12			49157	Y	14378
Brick 10.70.37.174:/rhs/bricks/d5r22			49157	Y	13152
Brick 10.70.37.136:/rhs/bricks/d6r12			49157	Y	13105
Brick 10.70.37.168:/rhs/bricks/d6r22			49157	Y	13207
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	14395
NFS Server on 8ba345c1-0723-4c6c-a380-35f2d4c706c7	2049	Y	13115
Self-heal Daemon on 8ba345c1-0723-4c6c-a380-35f2d4c706c7	N/A	Y	13123
NFS Server on 54fb720d-b816-4e2a-833c-7fbffc5a5363	2049	Y	13217
Self-heal Daemon on 54fb720d-b816-4e2a-833c-7fbffc5a5363	N/A	Y	13228
NFS Server on 6f39ec04-96fb-4aa7-b31a-29f11d34dac1	2049	Y	13162
Self-heal Daemon on 6f39ec04-96fb-4aa7-b31a-29f11d34dac1	N/A	Y	13171
 
There are no active volume tasks
[root@quota1 ~]# ps -eaf | grep nfs
root     14388     1  0 12:17 ?        00:00:37 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/5c5d0edb55e9ad04152dbca9105add6c.socket
root     16300 15596  0 14:58 pts/1    00:00:00 grep nfs
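
For reference, whether the gluster NFS server is really down can be cross-checked with a few standard commands (a hedged sketch; only the ps check above was part of the original run):

# is the gluster NFS process alive?
ps -eaf | grep '[g]luster/nfs'

# are the NFS and MOUNT services registered with the portmapper?
rpcinfo -p localhost | grep -E 'nfs|mountd'

# are the volumes actually exported over NFS?
showmount -e localhost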


Version-Release number of selected component (if applicable):
[root@quota1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64


How reproducible:
This test was tried for the first time; the issue was hit on that run.

Steps to Reproduce:
The steps are essentially the same as those in the test case linked above; a rough sketch of the setup follows.
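
For reference, a rough sketch of the setup reconstructed from the volume info above (the mount points and the file-creation loop are assumptions, not taken from the test case):

# on one of the storage nodes: enable quota and set a 5GB limit on both volumes
gluster volume quota dist-rep enable
gluster volume quota dist-rep limit-usage / 5GB
gluster volume quota dist-rep2 enable
gluster volume quota dist-rep2 limit-usage / 5GB

# on client c1: NFS (v3) mount of the first volume
mount -t nfs -o vers=3 10.70.37.98:/dist-rep /mnt/nfs-test-quota

# on client c2: NFS (v3) mount of the second volume, using the same server address
mount -t nfs -o vers=3 10.70.37.98:/dist-rep2 /mnt/nfs-test

# on both clients: keep creating files under dir1 until the 5GB limit is reached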

Points to be noted:

1. The mount type for both volumes is NFS.
2. Both volumes are mounted from the same server.
3. The two volumes are mounted on different clients.
4. Volume dist-rep is mounted on client c1.
5. Volume dist-rep2 is mounted on client c2.
6. On client c1, after 327 files have been created, no more files get created.
The size shown on client c1 is:
[root@rhel6 nfs-test-quota]# du -sh
909M    .
[root@rhel6 nfs-test-quota]# du -k
929840  ./dir1
0       ./dir2
0       ./dir3
0       ./dir4
929840  .
[root@rhel6 nfs-test-quota]# 


On client c2, data keeps being created normally:
[saurabh@konsoul nfs-test]$ du -sh
1.8G	.
[saurabh@konsoul nfs-test]$ du -k
1785904	./dir1
0	./dir2
0	./dir3
0	./dir4
1785904	.
[saurabh@konsoul nfs-test]$ 

The size keeps increasing on client c2, the client that has the second volume mounted.

7. Also to be noted: the "du -h" output and the quota list do not match, on both volumes (a consolidated comparison is sketched after the output below):

[root@quota1 ~]# gluster volume quota dist-rep2 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                            5GB       80%     322.9MB   4.7GB
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                            5GB       80%     664.5MB   4.4GB
[root@quota1 ~]# 
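
For reference, the client-side and server-side numbers above can be compared directly (a hedged sketch; the mount points and the per-brick cross-check are assumptions, not part of the original run):

# on the clients: usage as seen through the NFS mounts
du -sk /mnt/nfs-test-quota    # dist-rep  -> 929840 KB (~909 MB)
du -sk /mnt/nfs-test          # dist-rep2 -> 1785904 KB (~1.8 GB)

# on the server: what quota has accounted for
gluster volume quota dist-rep list    # Used: 664.5MB
gluster volume quota dist-rep2 list   # Used: 322.9MB

# optionally, on each storage node, sum the brick usage for a volume and divide
# the grand total by the replica count (2) for a third, independent number
du -sk /rhs/bricks/d*r1 /rhs/bricks/d*r2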


Additional info:

script used to create data is
Comment 6 Pavithra 2013-07-30 06:43:10 EDT
977483 - As per my conversation with Amar, we are not documenting this quota bug in the RC release notes.
Comment 10 Scott Haines 2013-09-23 18:29:52 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
