Description of problem:
Found while working on BZ 978778. When quotad is down, "gluster volume quota <vol-name> list" displays different values for the Used and Available fields on every invocation. Even after quotad is brought back up, the reported values keep changing.

Version-Release number of selected component (if applicable):
[root@quota1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64

How reproducible:
Intermittent; occurs on most invocations.

Actual results:

With quotad down:

[root@quota1 ~]# ps -eaf | grep quotad
root     20607  2140  0 06:15 pts/0    00:00:00 grep quotad
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB    39.7GB
/dir                                         5GB       80%       5.8GB    0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB    39.7GB
/dir                                         5GB       80%       5.3GB    0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB    33.6GB
/dir                                         5GB       80%       5.8GB    0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB    33.6GB
/dir                                         5GB       80%       3.1GB     1.9GB
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB    39.7GB
/dir                                         5GB       80%       5.8GB    0Bytes
[root@quota1 ~]# ps -eaf | grep quotad
root     20745  2140  0 06:22 pts/0    00:00:00 grep quotad

Even after bringing quotad back up, the values keep changing:
=======================================================================
[root@quota1 ~]# ps -eaf | grep quotad
root     20818      1  2 06:28 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/f83c88ea4e77077c8a1a1daeab2325ee.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     20911  2140  0 06:28 pts/0    00:00:00 grep quotad
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      60.3GB    39.7GB
/dir                                         5GB       80%       6.5GB    0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      66.4GB    33.6GB
/dir                                         5GB       80%       6.5GB    0Bytes
[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          100GB       80%      63.3GB    36.7GB
/dir                                         5GB       80%       6.5GB    0Bytes

Expected results:
The list data should be consistent when no I/O is happening.

Additional info:
It is fixed in the latest build, glusterfs-3.4.0.12rhs.beta1.
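For regression testing, the flapping values can be detected mechanically by capturing two consecutive list outputs and comparing the path/Used columns. A minimal sketch; the two samples below are hard-coded from the transcript in this report, and in practice each would be captured with something like out1=$(gluster volume quota dist-rep list):

```shell
#!/bin/bash
# Sample outputs copied from the transcript above (hypothetical capture step
# omitted; a real check would run `gluster volume quota <vol> list` twice
# with no I/O in between).
out1='/       100GB   80%   60.3GB   39.7GB
/dir      5GB   80%    5.8GB   0Bytes'

out2='/       100GB   80%   60.3GB   39.7GB
/dir      5GB   80%    5.3GB   0Bytes'

# Keep only the path (column 1) and Used (column 4) fields, then compare.
used1=$(printf '%s\n' "$out1" | awk '{print $1, $4}')
used2=$(printf '%s\n' "$out2" | awk '{print $1, $4}')

if [ "$used1" = "$used2" ]; then
    echo "consistent"
else
    echo "INCONSISTENT: Used values changed between invocations"
fi
```

With the sample data above this prints "INCONSISTENT: Used values changed between invocations", since /dir reports 5.8GB on the first run and 5.3GB on the second.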
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html