Bug 1418227
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Quota: After deleting directory from mount point on which quota was configured, quota list command output is blank | | |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Anil Shah <ashah> |
| Component: | quota | Assignee: | Sanoj Unnikrishnan <sunnikri> |
| Status: | CLOSED ERRATA | QA Contact: | Anil Shah <ashah> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.2 | CC: | amukherj, asrivast, bmohanra, ccalhoun, rhinduja, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.3.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.8.4-19 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1418259 (view as bug list) | Environment: | |
| Last Closed: | 2017-09-21 04:30:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1418259, 1549482 | | |
| Bug Blocks: | 1417147 | | |
Description
Anil Shah
2017-02-01 10:44:28 UTC
The issue is hit only if the last gfid in quota.conf happens to be stale (due to an rmdir). The code that prints the list is nested under the rsp.dict.dict_len check in cli_quotad_getlimit_cbk. If the last gfid happens to be stale, dict_len is zero and we never reach the print code. The fix is to move the print statement outside the check.

```c
if (rsp.dict.dict_len) {
        dict = dict_new ();
        ret = dict_unserialize (rsp.dict.dict_val, rsp.dict.dict_len,
                                &dict);
        ...
        ret = dict_get_int32 (local->dict, "max_count", &max_count);
        ...
        node = list_node_add_order (dict, &local->dict_list,
                                    cli_quota_compare_path);
        ...
        if (list_count == max_count) {
                list_for_each_entry_safe (node, tmpnode,
                                          &local->dict_list, list) {
                        dict = node->ptr;
                        print_quota_list_from_quotad (frame, dict);
                        list_node_del (node);
                        dict_unref (dict);
                }
        }
}
```

upstream patch: https://review.gluster.org/#/c/16507/
downstream patch: https://code.engineering.redhat.com/gerrit/#/c/101307

Verification. Quota limits before deleting the directory:

```
[root@rhs-arch-srv1 yum.repos.d]# gluster v quota vol1 list
                  Path    Hard-limit   Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                            2.0GB     80%(1.6GB)     1.8GB    190.0MB            Yes                  No
/test                        1.0GB    80%(819.2MB)    1.0GB     0Bytes            Yes                  Yes
```

Operations on the client:

```
[root@dhcp47-13 fuse]# ll
total 838149
drwxr-xr-x. 2 root root      4096 May  2 14:25 test
-rw-r--r--. 1 root root 858259484 May  2 13:59 testfile
[root@dhcp47-13 fuse]# rm -rf test
```

After the rmdir, the quota list output is no longer blank:

```
[root@rhs-arch-srv1 yum.repos.d]# gluster v quota vol1 list
                  Path    Hard-limit   Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                            2.0GB     80%(1.6GB)   818.5MB     1.2GB             No                   No
```

Bug verified on build glusterfs-3.8.4-24.el7rhgs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

*** Bug 1575154 has been marked as a duplicate of this bug. ***
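The control-flow flaw described in the report can be sketched independently of the glusterfs sources. The following Python sketch (hypothetical names, not the actual CLI code) models a callback that accumulates one response per gfid and flushes the accumulated list only when the final response arrives: if the flush is nested under the non-empty-response check, a stale last gfid (empty response) means the flush is never reached.

```python
# Sketch of the bug pattern: the flush-and-print step is guarded by the
# same "response non-empty" check that a stale gfid fails, so a stale
# LAST gfid suppresses all output. Names here are illustrative only.

def list_limits_buggy(responses):
    collected = []
    printed = []
    max_count = len(responses)
    for count, resp in enumerate(responses, start=1):
        if resp:                      # mirrors the rsp.dict.dict_len check
            collected.append(resp)
            if count == max_count:    # flush only reachable inside the check
                printed = collected[:]
    return printed

def list_limits_fixed(responses):
    collected = []
    printed = []
    max_count = len(responses)
    for count, resp in enumerate(responses, start=1):
        if resp:
            collected.append(resp)
        if count == max_count:        # flush moved outside the check
            printed = collected[:]
    return printed

# Last entry is stale (empty response after rmdir).
responses = [{"path": "/"}, {}]
```

With this input, the buggy variant returns an empty list (blank `quota list` output) while the fixed variant still returns the limit for `/`, matching the behavior change described above.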