| Summary: | quota build: reset command fails |
|---|---|
| Product: | Red Hat Gluster Storage |
| Component: | glusterd |
| Version: | 2.1 |
| Hardware: | x86_64 |
| OS: | Linux |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | high |
| Reporter: | Saurabh <saujain> |
| Assignee: | Anuradha <atalur> |
| QA Contact: | Saurabh <saujain> |
| CC: | atalur, kparthas, mzywusko, rgowdapp, rhs-bugs, smohan, vagarwal, vbellur |
| Keywords: | ZStream |
| Target Milestone: | --- |
| Target Release: | --- |
| Fixed In Version: | glusterfs-3.4.0.37rhs-1 |
| Doc Type: | Bug Fix |
| Doc Text: | Previously, the volume reset command failed when protected options were set on the volume. Quota is a protected field, so the command failed whenever quota was enabled on the volume. With this update, volume reset without the "force" option no longer fails when protected fields are set; a message is displayed indicating that "force" must be used to reset the protected fields. |
| Story Points: | --- |
| Cloned As: | 1022905 (view as bug list) |
| Bug Blocks: | 1022905 |
| Last Closed: | 2013-11-27 15:32:34 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
Description (Saurabh, 2013-08-26 05:03:35 UTC)

Patch posted for review on branch rhs-2.1-u1: https://code.engineering.redhat.com/gerrit/#/c/14325/
Patch posted for review on branch rhs-2.1: https://code.engineering.redhat.com/gerrit/#/c/14326/

Verified on glusterfs-3.4.0.37rhs.
The test was done as follows.

Test 1
Step 1. Enable quota, set a limit on the root of the volume and on a directory underneath it, and turn quota-deem-statfs on.
Step 2. Set the diagnostics.client-log-level option to DEBUG.

Volume info:
[root@quota4 ~]# gluster volume info dist-rep5
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
diagnostics.client-log-level: DEBUG
features.quota-deem-statfs: on
features.quota: on
Step 3. Execute the reset command.

Result:
[root@quota1 ~]# gluster volume reset dist-rep5
volume reset: success: All unprotected fields were reset. To reset the protected fields, use 'force'.
Volume info after the reset:
[root@quota4 ~]# gluster volume info dist-rep5
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
features.quota: on
Step 4. Check the quota limits:
[root@quota1 ~]# gluster volume quota dist-rep5 list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/dir 1000.0PB 80% 0Bytes 1000.0PB
/ 7000.0PB 80% 0Bytes 7000.0PB
So the quota limits are effectively unaffected by the reset command, although quota-deem-statfs is reset, as expected, by the execution of the volume reset command.
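The behavior verified above can be modeled with a small, self-contained sketch. This is not glusterd source code; the option names come from the transcript, and treating features.quota as the only protected field is an assumption inferred from the observed output: a plain reset clears unprotected options, while only 'force' clears protected ones.

```shell
#!/usr/bin/env bash
# Toy model of "gluster volume reset" semantics, inferred from the test
# output above. Not glusterd code; option names come from the transcript.
declare -A OPTS=(
  [diagnostics.client-log-level]=DEBUG
  [features.quota-deem-statfs]=on
  [features.quota]=on
)

reset_volume() {  # pass "force" to also clear protected fields
  local key
  for key in "${!OPTS[@]}"; do
    # features.quota is the protected field in this model (assumption)
    if [[ $key == features.quota && $1 != force ]]; then
      continue    # protected field survives a plain reset
    fi
    unset "OPTS[$key]"
  done
}

reset_volume                       # plain reset, as run in the test above
echo "after reset: ${!OPTS[*]}"   # only features.quota remains
reset_volume force                 # 'force' clears protected fields too
echo "after force: ${!OPTS[*]}"   # nothing remains
```

This mirrors the transcript: after a plain reset, the volume info shows only features.quota under "Options Reconfigured".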
Test 2
The same test as above, but with some data inside the directory.

Volume info and quota status:
[root@quota1 ~]# gluster volume info dist-rep5
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
features.quota-deem-statfs: on
diagnostics.client-log-level: DEBUG
features.quota: on
[root@quota1 ~]#
[root@quota1 ~]# gluster volume quota dist-rep5 list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/dir 5.0GB 80% 5.0GB 0Bytes
/ 10.0GB 80% 5.0GB 5.0GB
Execute the reset command on any node of the cluster. The status after the reset is the same on all four nodes, as shown below:
[root@quota3 ~]# gluster volume info dist-rep5
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Options Reconfigured:
features.quota: on
Quota list output on one of the nodes:
[root@quota4 ~]# gluster volume quota dist-rep5 list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/dir 5.0GB 80% 5.0GB 0Bytes
/ 10.0GB 80% 5.0GB 5.0GB
Create more data inside the mount point using this script:

[root@konsoul dir1]# for i in `seq 1 100`; do time dd if=/dev/input_file of=f.$i bs=102400 count=1024; done

Quota status afterwards:
[root@quota1 ~]# gluster volume quota dist-rep5 list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/dir 5.0GB 80% 5.0GB 0Bytes
/ 10.0GB 80% 10.1GB 0Bytes
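The enforcement pattern visible in these listings (writes succeed while Used stays below the Hard-limit, after which Available drops to 0Bytes) can be sketched with a toy check. Sizes are plain GB numbers and the function name is hypothetical, not a gluster API:

```shell
# Toy quota check: a new write is admitted only while used < hard-limit.
# awk handles the floating-point comparison; exit status is the verdict.
quota_allows() {  # $1 = used (GB), $2 = hard limit (GB)
  awk -v u="$1" -v h="$2" 'BEGIN { exit !(u < h) }'
}

quota_allows 5.0 10.0 && echo "write allowed"   # "/" at 5.0GB of 10.0GB
quota_allows 10.1 10.0 || echo "write denied"   # "/" over its 10.0GB limit
```

Note that the listing above shows Used at 10.1GB against a 10.0GB hard limit; a small overshoot past the limit is possible because enforcement happens per write.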
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html