Bug 1418227

Summary: Quota: After deleting directory from mount point on which quota was configured, quota list command output is blank
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Anil Shah <ashah>
Component: quota Assignee: Sanoj Unnikrishnan <sunnikri>
Status: CLOSED ERRATA QA Contact: Anil Shah <ashah>
Severity: high Docs Contact:
Priority: unspecified    
Version: rhgs-3.2CC: amukherj, asrivast, bmohanra, ccalhoun, rhinduja, rhs-bugs, storage-qa-internal
Target Milestone: ---   
Target Release: RHGS 3.3.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.8.4-19 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1418259 (view as bug list) Environment:
Last Closed: 2017-09-21 04:30:55 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1418259, 1549482    
Bug Blocks: 1417147    

Description Anil Shah 2017-02-01 10:44:28 UTC
Description of problem:

When quota is configured on a directory and that directory is then deleted from the mount point,
the quota list command prints no output at all.

Version-Release number of selected component (if applicable):

glusterfs-server-3.8.4-13.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create a distribute-replicate volume
2. FUSE-mount the volume and create some files and directories
3. Enable quota and set limit-usage on the directories
4. From the mount point, delete a directory on which quota was configured
5. Run the quota list command
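The steps above roughly correspond to the following CLI sequence (a sketch only; the volume name `vol0`, server and brick paths, and the mount point `/mnt/vol0` are assumed for illustration, not taken from the report):

```shell
# 1. Create a 2x2 distribute-replicate volume (server/brick paths assumed)
gluster volume create vol0 replica 2 \
    server1:/bricks/b1 server2:/bricks/b2 \
    server3:/bricks/b3 server4:/bricks/b4
gluster volume start vol0

# 2. FUSE-mount the volume and create a directory
mount -t glusterfs server1:/vol0 /mnt/vol0
mkdir /mnt/vol0/test

# 3. Enable quota and set a usage limit on the directory
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage /test 1GB

# 4. Delete the quota-configured directory from the mount point
rm -rf /mnt/vol0/test

# 5. The list should still show any remaining limits (e.g. on /),
#    but on the affected builds it prints nothing
gluster volume quota vol0 list
```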

Actual results:

quota list command output is blank
=================================
[root@dhcp46-88 yum.repos.d]# gluster v quota vol0 list
[root@dhcp46-88 yum.repos.d]# 


Expected results:

The quota list command should display data usage for the remaining configured limits.


Additional info:

Comment 2 Sanoj Unnikrishnan 2017-02-01 11:54:13 UTC
The issue is hit only if the last gfid in quota.conf happens to be stale (due to the rmdir).

The code that prints the list is nested under the rsp.dict.dict_len check in cli_quotad_getlimit_cbk. If the last gfid happens to be stale, dict_len
is zero and we never reach the print code.
We need to move the print statement outside the check to solve this issue.

        if (rsp.dict.dict_len) {
                dict = dict_new ();
                ret = dict_unserialize (rsp.dict.dict_val,
                                        rsp.dict.dict_len,
                                        &dict);
                ...
                ret = dict_get_int32 (local->dict, "max_count",
                                      &max_count);
                ...
                node = list_node_add_order (dict, &local->dict_list,
                                            cli_quota_compare_path);
                ...

                if (list_count == max_count) {
                        list_for_each_entry_safe (node, tmpnode,
                                                  &local->dict_list, list) {
                                dict = node->ptr;
                                print_quota_list_from_quotad (frame, dict);
                                list_node_del (node);
                                dict_unref (dict);
                        }
                }
        }

Comment 3 Atin Mukherjee 2017-02-01 14:37:48 UTC
upstream patch : https://review.gluster.org/#/c/16507/

Comment 8 Atin Mukherjee 2017-03-24 09:29:31 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/101307

Comment 10 Anil Shah 2017-05-02 09:02:44 UTC
[root@rhs-arch-srv1 yum.repos.d]# gluster v quota vol1 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                          2.0GB     80%(1.6GB)    1.8GB 190.0MB             Yes                   No
/test                                      1.0GB     80%(819.2MB)    1.0GB  0Bytes             Yes                  Yes

Operations on Client:
[root@dhcp47-13 fuse]# ll
total 838149
drwxr-xr-x. 2 root root      4096 May  2 14:25 test
-rw-r--r--. 1 root root 858259484 May  2 13:59 testfile
[root@dhcp47-13 fuse]# rm -rf test

[root@rhs-arch-srv1 yum.repos.d]# gluster v quota vol1 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                          2.0GB     80%(1.6GB)  818.5MB   1.2GB              No                   No


bug verified on build glusterfs-3.8.4-24.el7rhgs.x86_64

Comment 12 errata-xmlrpc 2017-09-21 04:30:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

Comment 14 Sanoj Unnikrishnan 2018-05-08 08:39:09 UTC
*** Bug 1575154 has been marked as a duplicate of this bug. ***