Bug 1000903 - quota build: reset command fails
Summary: quota build: reset command fails
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Anuradha
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 1022905
 
Reported: 2013-08-26 05:03 UTC by Saurabh
Modified: 2016-09-20 02:00 UTC
8 users

Fixed In Version: glusterfs-3.4.0.37rhs-1
Doc Type: Bug Fix
Doc Text:
Previously, the volume reset command failed when protected options were set on the volume. Quota is a protected field, so the reset command would fail when quota was enabled on the volume. With this update, the volume reset command no longer fails when protected fields are set, even without the "force" option. A message is displayed indicating that "force" must be used to reset the protected fields.
Clone Of:
: 1022905
Environment:
Last Closed: 2013-11-27 15:32:34 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1769 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #1 2013-11-27 20:17:39 UTC

Description Saurabh 2013-08-26 05:03:35 UTC
Description of problem:
I tried to use the reset command on the latest build.
Going by the command's response, it failed,
but the subsequent gluster volume info shows the options actually were reset.
So at present this appears to be mostly a response (messaging) issue.


Version-Release number of selected component (if applicable):
glusterfs-rdma-3.4.0.20rhsquota2-1.el6rhs.x86_64
glusterfs-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-server-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-fuse-3.4.0.20rhsquota1-1.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. create a volume of 6x2 type, start it
2. use gluster volume set

[root@rhsauto032 ~]# gluster volume set dist-rep3 diagnostics.client-log-level DEBUG
volume set: success
[root@rhsauto032 ~]# gluster volume info dist-rep3
 
Volume Name: dist-rep3
Type: Distributed-Replicate
Volume ID: 6aaeda5c-b6f6-42c2-8003-b4035f62085b
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d1r1-3
Brick2: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d1r2-3
Brick3: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d2r1-3
Brick4: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d2r2-3
Brick5: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d3r1-3
Brick6: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d3r2-3
Brick7: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d4r1-3
Brick8: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d4r2-3
Brick9: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d5r1-3
Brick10: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d5r2-3
Brick11: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d6r1-3
Brick12: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d6r2-3
Options Reconfigured:
diagnostics.client-log-level: DEBUG


3. use gluster volume reset

Actual results:
[root@rhsauto032 ~]# gluster volume info dist-rep3
 
Volume Name: dist-rep3
Type: Distributed-Replicate
Volume ID: 6aaeda5c-b6f6-42c2-8003-b4035f62085b
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d1r1-3
Brick2: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d1r2-3
Brick3: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d2r1-3
Brick4: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d2r2-3
Brick5: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d3r1-3
Brick6: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d3r2-3
Brick7: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d4r1-3
Brick8: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d4r2-3
Brick9: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d5r1-3
Brick10: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d5r2-3
Brick11: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d6r1-3
Brick12: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d6r2-3
Options Reconfigured:
features.quota: off

Expected results:
the response should reflect what the command actually did.

Additional info:

.cmd_log_history says,

[2013-08-26 02:13:15.329388]  : volume set dist-rep3 diagnostics.client-log-level DEBUG : SUCCESS
[2013-08-26 02:13:31.214887]  : volume reset dist-rep3 : FAILED : 'all' is protected. To reset use 'force'.
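
Entries like these are easy to check mechanically. Below is a small, hypothetical Python helper for splitting such lines into timestamp, command, status, and detail; the line layout is inferred from the two entries shown above, not from a documented format:

```python
import re

# Inferred .cmd_log_history entry layout:
#   [timestamp]  : <command> : <SUCCESS|FAILED>[ : <detail>]
LOG_RE = re.compile(
    r"^\[(?P<ts>[^\]]+)\]\s*:\s*(?P<cmd>.+?)\s*:\s*(?P<status>SUCCESS|FAILED)"
    r"(?:\s*:\s*(?P<detail>.*))?$"
)

def parse_cmd_log(line):
    """Return a dict of fields for one log line, or None if it doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_cmd_log(
    "[2013-08-26 02:13:31.214887]  : volume reset dist-rep3 : FAILED : "
    "'all' is protected. To reset use 'force'."
)
# entry["cmd"] == "volume reset dist-rep3", entry["status"] == "FAILED"
```

A parser like this makes it straightforward to scan a node's history for commands that reported FAILED while their effect still took place, which is exactly the mismatch this bug describes.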

Comment 3 Anuradha 2013-10-21 11:31:48 UTC
Patch posted for review on branch rhs-2.1-u1:
https://code.engineering.redhat.com/gerrit/#/c/14325/

Comment 4 Anuradha 2013-10-21 11:56:01 UTC
Patch posted for review on branch rhs-2.1:
https://code.engineering.redhat.com/gerrit/#/c/14326/

Comment 5 Saurabh 2013-10-31 12:50:52 UTC
Verified on glusterfs-3.4.0.37rhs. The test was done as follows:

Test1
step 1. enable quota, set limits on the root of the volume and on a directory underneath it,
   and turn quota-deem-statfs on
step 2. try to set the diagnostics.client-log-level option to DEBUG

volume info,

[root@quota4 ~]# gluster volume info dist-rep5
 
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
diagnostics.client-log-level: DEBUG
features.quota-deem-statfs: on
features.quota: on


step 3. execute reset command

result:-
[root@quota1 ~]# gluster volume reset dist-rep5 
volume reset: success: All unprotected fields were reset. To reset the protected fields, use 'force'.

vol info,
[root@quota4 ~]# gluster volume info dist-rep5
 
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
features.quota: on

step 4. 
[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir                                     1000.0PB       80%      0Bytes 1000.0PB
/                                        7000.0PB       80%      0Bytes 7000.0PB

So quota limits are effectively not affected by the reset command.

The quota-deem-statfs option, however, is reset, which is expected behaviour for the volume reset command since it is not a protected field.
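
The behaviour verified in Test1 can be summarized with a minimal Python sketch (this is an illustrative model, not glusterd's actual code, and the set of protected options is assumed for the example): without 'force', unprotected options are cleared, protected ones are kept, and the message points the user at 'force'.

```python
# Assumed for illustration: features.quota is the only protected option here.
PROTECTED = {"features.quota"}

def volume_reset(options, force=False):
    """Model of 'gluster volume reset': return (remaining_options, message)."""
    if force:
        # With force, everything is cleared, protected or not.
        return {}, "volume reset: success"
    # Without force, only unprotected options are removed.
    remaining = {k: v for k, v in options.items() if k in PROTECTED}
    msg = ("volume reset: success: All unprotected fields were reset. "
           "To reset the protected fields, use 'force'.")
    return remaining, msg

opts = {
    "diagnostics.client-log-level": "DEBUG",
    "features.quota-deem-statfs": "on",
    "features.quota": "on",
}
remaining, msg = volume_reset(opts)
# remaining == {"features.quota": "on"}, matching the volume info above
```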


Test2:-
Similar test as above but with some data inside the directory,


volume info and quota status,

[root@quota1 ~]# gluster volume info dist-rep5
 
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
features.quota-deem-statfs: on
diagnostics.client-log-level: DEBUG
features.quota: on
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir                                       5.0GB       80%       5.0GB  0Bytes
/                                         10.0GB       80%       5.0GB   5.0GB


execute reset command on any node of the cluster,

the status after reset is the same on all 4 nodes, as shown below,
[root@quota3 ~]# gluster volume info dist-rep5
 
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
features.quota: on


quota list output on one of the nodes,
[root@quota4 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir                                       5.0GB       80%       5.0GB  0Bytes
/                                         10.0GB       80%       5.0GB   5.0GB

create more data inside the mountpoint using this loop,
[root@konsoul dir1]# for i in `seq 1 100`; do time dd if=/dev/input_file of=f.$i bs=102400 count=1024; done

quota list output afterwards,
[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir                                       5.0GB       80%       5.0GB  0Bytes
/                                         10.0GB       80%      10.1GB  0Bytes
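
For completeness, here is a hypothetical parser for the quota list rows shown above; the column layout and field names (path, hard limit, soft limit, used, available) are assumptions read off this output, not a documented format:

```python
import re

# One data row of 'gluster volume quota <vol> list', as seen in the
# transcripts above: path, hard-limit, soft-limit (percentage), used, available.
ROW_RE = re.compile(
    r"^(?P<path>\S+)\s+(?P<hard>\S+)\s+(?P<soft>\d+%)\s+"
    r"(?P<used>\S+)\s+(?P<avail>\S+)$"
)

def parse_quota_row(line):
    """Return a dict for one quota list row, or None for headers/separators."""
    m = ROW_RE.match(line.strip())
    return m.groupdict() if m else None

row = parse_quota_row(
    "/                                         10.0GB       80%      10.1GB  0Bytes"
)
# row["used"] == "10.1GB": usage already past the 10.0GB hard limit
```

Note that the header and separator lines do not match the pattern and are returned as None, so the whole table can be filtered in one pass.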

Comment 6 errata-xmlrpc 2013-11-27 15:32:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html

