Bug 978820 - quota: keeps changing the values of different fields in list command
Summary: quota: keeps changing the values of different fields in list command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: vpshastry
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-06-27 07:47 UTC by Saurabh
Modified: 2016-01-19 06:12 UTC (History)
5 users

Fixed In Version: glusterfs-3.4.0.12rhs.beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-23 22:39:53 UTC
Embargoed:



Description Saurabh 2013-06-27 07:47:31 UTC
Description of problem:
This problem was found while working on BZ 978778. With quotad down,
gluster volume quota <vol-name> list
keeps displaying different values for the Used and Available fields on every invocation.
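The fluctuation can be checked mechanically by capturing two consecutive runs and diffing them. A minimal sketch in bash; the run1/run2 variables hold sample rows copied from the transcript below (on a live cluster you would capture real output, e.g. gluster volume quota dist-rep list > run1.txt):

```shell
# Two captures of the per-path rows from `gluster volume quota <vol> list`
# (sample data from this report, not live output).
run1='/     100GB  80%  60.3GB  39.7GB
/dir    5GB  80%   5.8GB  0Bytes'
run2='/     100GB  80%  60.3GB  39.7GB
/dir    5GB  80%   5.3GB  0Bytes'

# diff exits non-zero when the captures differ, flagging the bug.
if diff <(printf '%s\n' "$run1") <(printf '%s\n' "$run2") >/dev/null; then
    echo "consistent"
else
    echo "inconsistent"
fi
```

With the sample rows above, the /dir Used field differs (5.8GB vs 5.3GB), so the check reports "inconsistent".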

Version-Release number of selected component (if applicable):
[root@quota1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64

How reproducible:
Frequently; the reported values fluctuate across repeated invocations.


Actual results:
[root@quota1 ~]# ps -eaf | grep quotad
root     20607  2140  0 06:15 pts/0    00:00:00 grep quotad
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       5.8GB  0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       5.3GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB  33.6GB
/dir                                         5GB       80%       5.8GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB  33.6GB
/dir                                         5GB       80%       3.1GB   1.9GB
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       5.8GB  0Bytes
[root@quota1 ~]# ps -eaf | grep quotad
root     20745  2140  0 06:22 pts/0    00:00:00 grep quotad

Even after bringing quotad back up, the values keep changing:
=======================================================================
[root@quota1 ~]# ps -eaf | grep quotad
root     20818     1  2 06:28 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/f83c88ea4e77077c8a1a1daeab2325ee.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     20911  2140  0 06:28 pts/0    00:00:00 grep quotad
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB  39.7GB
/dir                                         5GB       80%       6.5GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB  33.6GB
/dir                                         5GB       80%       6.5GB  0Bytes
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      63.3GB  36.7GB
/dir                                         5GB       80%       6.5GB  0Bytes


Expected results:
The list output should remain consistent when no I/O is in progress.
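That expectation can be phrased as a check: run the list command several times with no I/O in flight and count distinct outputs; a healthy quotad should yield exactly one. A sketch under that assumption, where the list_cmd stub stands in for the real gluster invocation:

```shell
# Stub standing in for: gluster volume quota dist-rep list
# (replace with the real command on a live cluster; %% prints a literal %).
list_cmd() { printf '/ 100GB 80%% 60.3GB 39.7GB\n'; }

# With no I/O in progress, repeated runs should produce identical output,
# so the number of distinct outputs should be exactly 1.
distinct=$(for i in 1 2 3 4 5; do list_cmd; done | sort -u | wc -l)
echo "distinct outputs: $distinct"
```

A distinct count greater than 1 would reproduce the inconsistency described in this bug.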

Additional info:

Comment 5 vpshastry 2013-07-05 06:09:46 UTC
It is fixed in the latest build, glusterfs-3.4.0.12rhs.beta1.

Comment 7 Scott Haines 2013-09-23 22:39:53 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

