Bug 978820 - quota: keeps changing the values of different fields in list command
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: high  Severity: high
Assigned To: vpshastry
QA Contact: Saurabh
Depends On:
Blocks:
Reported: 2013-06-27 03:47 EDT by Saurabh
Modified: 2016-01-19 01:12 EST (History)
5 users

See Also:
Fixed In Version: glusterfs-3.4.0.12rhs.beta1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-23 18:39:53 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

None
Description Saurabh 2013-06-27 03:47:31 EDT
Description of problem:
This problem was found while working on BZ 978778.
The issue is that, with quotad down,
gluster volume quota <vol-name> list
keeps displaying different values in the Used and Available fields on every invocation.
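With no I/O running, two successive list invocations should produce byte-identical output, so the drift can be demonstrated mechanically by capturing and diffing consecutive outputs. A minimal sketch (check_drift is a hypothetical helper, not a gluster command; the volume name dist-rep is the one used in the transcript below):

```shell
#!/bin/sh
# check_drift: diff two captured outputs of
# 'gluster volume quota <vol-name> list' and report whether they match.
# Takes file paths so it can be exercised without a live gluster cluster.
check_drift() {
    if diff -q "$1" "$2" >/dev/null 2>&1; then
        echo "STABLE"
    else
        echo "DRIFT"
    fi
}

# Intended use on an affected system:
#   gluster volume quota dist-rep list > /tmp/q1
#   sleep 5
#   gluster volume quota dist-rep list > /tmp/q2
#   check_drift /tmp/q1 /tmp/q2
```

On an affected build this prints DRIFT even when the volume is idle, which is the behavior reported here.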

Version-Release number of selected component (if applicable):
[root@quota1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64

How reproducible:
Happens repeatedly, though not on every invocation.


Actual results:
[root@quota1 ~]# ps -eaf | grep quotad
root     20607  2140  0 06:15 pts/0    00:00:00 grep quotad
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       5.8GB  0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       5.3GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB  33.6GB
/dir                                         5GB       80%       5.8GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB  33.6GB
/dir                                         5GB       80%       3.1GB   1.9GB
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       5.8GB  0Bytes
[root@quota1 ~]# ps -eaf | grep quotad
root     20745  2140  0 06:22 pts/0    00:00:00 grep quotad

Even after bringing quotad back up, the values keep changing:
=======================================================================
[root@quota1 ~]# ps -eaf | grep quotad
root     20818     1  2 06:28 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/f83c88ea4e77077c8a1a1daeab2325ee.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     20911  2140  0 06:28 pts/0    00:00:00 grep quotad
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       6.5GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB  33.6GB
/dir                                         5GB       80%       6.5GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      63.3GB  36.7GB
/dir                                         5GB       80%       6.5GB  0Bytes


Expected results:
The list output should be consistent when no I/O is in progress.

Additional info:
Comment 5 vpshastry 2013-07-05 02:09:46 EDT
It's fixed in the latest build, glusterfs-3.4.0.12rhs.beta1.
Comment 7 Scott Haines 2013-09-23 18:39:53 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
