Bug 1043301 - quota: quotad goes down and list cmd provides wrong info
Summary: quota: quotad goes down and list cmd provides wrong info
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-15 19:59 UTC by Susant Kumar Palai
Modified: 2015-10-22 15:40 UTC
CC: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-10-22 15:40:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Susant Kumar Palai 2013-12-15 19:59:10 UTC
Description of problem:

If the quotad process is killed while I/O is in progress, data creation crosses the configured hard limit.
In addition, the quota list command, which is supposed to do its own mount to collect the used and available
space for each limit, reports wrong values.
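
For reference, a minimal monitoring sketch (an assumption about how the state was observed, not part of the
original report): run on one of the server nodes while the client workload is writing, it shows whether quotad
is alive and what the list command reports over time. The volume name dist-rep matches the volume info below;
the 10-second interval is arbitrary.

# Illustrative watch loop; assumes a gluster server node and volume "dist-rep".
while true; do
    if pgrep -f 'volfile-id gluster/quotad' > /dev/null; then
        echo "quotad: running"
    else
        echo "quotad: DOWN"
    fi
    gluster volume quota dist-rep list
    sleep 10
done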

[root@nfs1 ~]# gluster volume info
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: edc23717-011b-42c6-b984-2922dcaeff5d
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.180:/rhs/bricks/d1r1
Brick2: 10.70.37.80:/rhs/bricks/d1r2
Brick3: 10.70.37.216:/rhs/bricks/d2r1
Brick4: 10.70.37.139:/rhs/bricks/d2r2
Brick5: 10.70.37.180:/rhs/bricks/d3r1
Brick6: 10.70.37.80:/rhs/bricks/d3r2
Brick7: 10.70.37.216:/rhs/bricks/d4r1
Brick8: 10.70.37.139:/rhs/bricks/d4r2
Brick9: 10.70.37.180:/rhs/bricks/d5r1
Brick10: 10.70.37.80:/rhs/bricks/d5r2
Brick11: 10.70.37.216:/rhs/bricks/d6r1
Brick12: 10.70.37.139:/rhs/bricks/d6r2
Options Reconfigured:
features.quota: on

Version-Release number of selected component (if applicable):
[root@nfs1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4.0.12rhs.beta4-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta4-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta4-1.el6rhs.x86_64
[root@nfs1 ~]# 


How reproducible:
Tried once; the issue reproduced on that attempt.

Steps to Reproduce:
1. Create a volume and start it.
2. Enable quota.
3. Set a hard limit of 1GB on the volume root.
4. Mount the volume over NFS.
5. mkdir mount-point/dir
6. Start creating data: 1MB files written in a for loop.
7. Kill the quotad process while the data creation is still in progress (a reproduction sketch follows below).
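
A reproduction sketch of the steps above, written as a single script for brevity (assumptions: the volume
dist-rep is already created and started, the gluster NFS export is mounted at /mnt/nfs-test, and the script
runs on a node that has both the gluster CLI and the NFS mount; in the original test the writes ran on a
separate NFS client):

#!/bin/bash
VOL=dist-rep
MNT=/mnt/nfs-test            # assumed NFS mount point of the volume

# Steps 2-3: enable quota and set a 1GB hard limit on the volume root.
gluster volume quota $VOL enable
gluster volume quota $VOL limit-usage / 1GB

# Step 5: create the test directory on the mount.
mkdir -p $MNT/dir

# Step 6: write 1MB files in a loop, well past the 1GB limit.
for i in $(seq 1 1500); do
    dd if=/dev/zero of=$MNT/dir/file.$i bs=1M count=1 2>/dev/null
done &

# Step 7: while the writes are running, kill quotad on the server.
sleep 30
kill -9 "$(pgrep -f 'volfile-id gluster/quotad')"

# Check what quota accounting now reports.
gluster volume quota $VOL list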


Actual results:

From the server:

[root@nfs1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                            1GB       90%      0Bytes   1.0GB
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# ps -eaf | grep quotad
root     19023     1 23 07:56 ?        00:00:17 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/630a2e7527bdbf1aa2d1c11280fa86d7.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     19074 21135  0 07:57 pts/1    00:00:00 grep quotad
[root@nfs1 ~]# kill -9 19023
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# #gluster volume quota dist-rep list
[root@nfs1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                            1GB       90%     407.1MB 616.9MB
[root@nfs1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                            1GB       90%     412.1MB 611.9MB




From the client:
[root@rhsauto030 dir]# du -h .
1.1G    .
[root@rhsauto030 dir]# pwd
/mnt/nfs-test/dir
[root@rhsauto030 dir]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rhsauto030-lv_root
                      47033288   2311148  42332944   6% /
tmpfs                   961260         0    961260   0% /dev/shm
/dev/vda1               495844     37546    432698   8% /boot
/dev/vdb1             51605908   3603124  45381356   8% /export
10.70.34.114:/opt     51606528   6228992  42756096  13% /opt
10.70.37.180:/dist-rep
                     3141390336   3528128 3137862208   1% /mnt/nfs-test
[root@rhsauto030 dir]# 
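
A quick cross-check, hedged as an illustration rather than part of the original run: compare the bytes actually
present under the quota-limited directory (client side) with the Used column from the list command (server
side). With quotad healthy the two figures should be roughly equal; here du reports ~1.1GB while quota list
still shows a few hundred MB.

# On the client: bytes actually written under the limited directory.
du -sb /mnt/nfs-test/dir

# On a server node: what quota accounting reports for the volume root.
gluster volume quota dist-rep list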


Expected results:
The quota list command should report the correct used and available space even after quotad is killed, and
data creation should not cross the 1GB hard limit.

Additional info:

Comment 1 Kaleb KEITHLEY 2015-10-22 15:40:20 UTC
The pre-release version is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

