Bug 1418259 - Quota: After deleting directory from mount point on which quota was configured, quota list command output is blank
Summary: Quota: After deleting directory from mount point on which quota was configured, quota list command output is blank
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Sanoj Unnikrishnan
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1418227 1549482
 
Reported: 2017-02-01 12:00 UTC by Sanoj Unnikrishnan
Modified: 2018-02-27 08:52 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.11.0
Clone Of: 1418227
: 1549482 (view as bug list)
Environment:
Last Closed: 2017-05-30 18:40:54 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments (Terms of Use)
Work around to fix stale gfid entry (615 bytes, text/plain)
2017-08-07 07:16 UTC, Sanoj Unnikrishnan

Description Sanoj Unnikrishnan 2017-02-01 12:00:48 UTC
Description of problem:

When a quota limit is configured on a directory and that directory is then deleted from the mount point, the quota list command shows no output.

How reproducible:

100%

Steps to Reproduce:
1. Create a distribute-replicate volume
2. Do a FUSE mount and create some files and directories
3. Enable quota and set limit-usage on the directories
4. Delete the last directory on which a limit was configured
5. Run the quota list command (see the CLI sketch below)
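
A minimal CLI sketch of these steps; the volume name vol0, hostnames, brick paths, mount point and directory names are placeholders, not taken from this report:

    # create and start a 2x2 distribute-replicate volume
    gluster volume create vol0 replica 2 host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3 host4:/bricks/b4
    gluster volume start vol0

    # FUSE mount and create a couple of directories
    mkdir -p /mnt/vol0
    mount -t glusterfs host1:/vol0 /mnt/vol0
    mkdir /mnt/vol0/dir1 /mnt/vol0/dir2

    # enable quota and set limits
    gluster volume quota vol0 enable
    gluster volume quota vol0 limit-usage /dir1 1GB
    gluster volume quota vol0 limit-usage /dir2 1GB

    # delete the directory whose limit was set last, then list
    rmdir /mnt/vol0/dir2
    gluster volume quota vol0 list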

Actual results:

quota list command output is blank
=================================
[root@dhcp46-88 yum.repos.d]# gluster v quota vol0 list
[root@dhcp46-88 yum.repos.d]# 


Expected results:

The quota list command should display the limits and data usage for the remaining directories.


The issue is hit only if the last gfid in quota.conf happens to be stale (due to rmdir).

The code that prints the list is nested under the rsp.dict.dict_len check in cli_quotad_getlimit_cbk. If the last gfid happens to be stale, dict_len is zero and we never reach the print code.
We need to move the print statement outside the check to solve this issue.

        if (rsp.dict.dict_len) {
                /* For a stale gfid (directory removed via rmdir), quotad
                 * returns an empty dict, so dict_len is 0 and this whole
                 * block, including the printing below, is skipped. */
                dict = dict_new ();
                ret = dict_unserialize (rsp.dict.dict_val,
                                        rsp.dict.dict_len,
                                        &dict);
                ...
                ret = dict_get_int32 (local->dict, "max_count",
                                      &max_count);
                ...
                node = list_node_add_order (dict, &local->dict_list,
                                            cli_quota_compare_path);
                ...

                /* The accumulated limits are printed only after the last
                 * response; if that last gfid is stale we never get here. */
                if (list_count == max_count) {
                        list_for_each_entry_safe (node, tmpnode,
                                                  &local->dict_list, list) {
                                dict = node->ptr;
                                print_quota_list_from_quotad (frame, dict);
                                list_node_del (node);
                                dict_unref (dict);
                        }
                }
        }

Comment 1 Worker Ant 2017-02-01 13:56:24 UTC
REVIEW: https://review.gluster.org/16507 (Fixes quota list when stale gfid exist in quota.conf) posted (#1) for review on master by sanoj-unnikrishnan (sunnikri)

Comment 2 Worker Ant 2017-02-06 11:36:28 UTC
COMMIT: https://review.gluster.org/16507 committed in master by Raghavendra G (rgowdapp) 
------
commit a3a38bb9cd2c17bd955489ae87800f398ef10239
Author: Sanoj Unnikrishnan <sunnikri>
Date:   Wed Feb 1 19:15:29 2017 +0530

    Fixes quota list when stale gfid exist in quota.conf
    
    when an rmdir is done, the gfid corresponding to the dir remains
    in quota.conf (if a limit was configured on the dir). The quota
    list should ignore them and print the remaining limits. In case
    the last gfid in quota.conf happened to be stale, the print code
    was getting skipped. Refactored the code to ensure printing happens.
    
    Change-Id: I3ac8e8a7a62d34e1fa8fd2734419459112c71797
    BUG: 1418259
    Signed-off-by: Sanoj Unnikrishnan <sunnikri>
    Reviewed-on: https://review.gluster.org/16507
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Manikandan Selvaganesh <manikandancs333>
    Reviewed-by: Raghavendra G <rgowdapp>

Comment 4 Shyamsundar 2017-05-30 18:40:54 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 5 Richard 2017-06-07 14:08:24 UTC
FYI this is still an issue in 3.11.0.

Also, in that release, if you go over your quota you can't write, but if you then increase the quota, you still can't write. Odd.

Comment 6 Richard 2017-06-07 14:12:32 UTC
It seems you have to remove the quota for that directory, and re-add a bigger quota value to enable writes.
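
A hedged sketch of that workaround in CLI terms; the volume name, path and size are placeholders:

    gluster volume quota vol0 remove /dir1             # drop the existing limit
    gluster volume quota vol0 limit-usage /dir1 20GB   # re-add a bigger limit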

Comment 7 Richard 2017-06-08 14:27:28 UTC
Ah, if you have 'cluster.nufa' enabled then quotas don't work nicely.

1. Set up a new volume and set cluster.nufa=on.
2. set a quota for a folder.
3. fill that folder and see the error you get (not a normal quota error).
4. increase the quota allocated to that folder
5. write to the folder again, and still get the (not normal) error.

However, if you don't enable NUFA initially on the volume, this all works as expected (see the command sketch below).
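
A rough CLI sketch of those steps, assuming the volume is already created and FUSE-mounted at /mnt/vol0; the volume name, folder name and sizes are placeholders, and the failing writes are the behaviour reported above, not verified here:

    gluster volume set vol0 cluster.nufa on
    gluster volume quota vol0 enable
    mkdir /mnt/vol0/folder
    gluster volume quota vol0 limit-usage /folder 100MB
    dd if=/dev/zero of=/mnt/vol0/folder/f1 bs=1M count=200   # exceeds the limit; fails, but not with the usual quota error
    gluster volume quota vol0 limit-usage /folder 1GB        # increase the limit
    dd if=/dev/zero of=/mnt/vol0/folder/f2 bs=1M count=200   # reportedly still fails while NUFA is on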

Comment 8 Sanoj Unnikrishnan 2017-08-07 07:16:35 UTC
Created attachment 1309930 [details]
Work around to fix stale gfid entry

This is a workaround script for users on versions prior to 3.11.
The script removes the last stale entry from quota.conf. If the quota list output still does not display after one run, the same script can be run multiple times.
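
The attachment itself is not reproduced here. As a rough illustration only (not the attached script), removing the last entry amounts to truncating quota.conf by one entry. This sketch assumes quota.conf format v1.2, where each entry is a 16-byte gfid plus a 1-byte type (older formats use 16-byte entries), and uses a placeholder volume name:

    #!/bin/bash
    # Illustrative sketch, not the attachment: drop the last (stale) gfid entry
    # from quota.conf by truncating one entry. Back up the file first; the
    # attached script remains the authoritative reference.
    VOL=vol0
    CONF=/var/lib/glusterd/vols/$VOL/quota.conf
    ENTRY_SIZE=17                                # assumes quota.conf v1.2 (16-byte gfid + 1 type byte)
    cp "$CONF" "$CONF.bak"
    SIZE=$(stat -c %s "$CONF")
    truncate -s $((SIZE - ENTRY_SIZE)) "$CONF"   # remove the last entry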

