Bug 1287997
| Summary: | tiering: quota list command is not working after attach or detach |
|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage |
| Component: | quota |
| Version: | rhgs-3.1 |
| Hardware: | x86_64 |
| OS: | Linux |
| Status: | CLOSED ERRATA |
| Severity: | urgent |
| Priority: | unspecified |
| Reporter: | Anil Shah <ashah> |
| Assignee: | Vijaikumar Mallikarjuna <vmallika> |
| QA Contact: | Anil Shah <ashah> |
| CC: | byarlaga, nchilaka, rcyriac, rhs-bugs, sankarshan, smohan, storage-qa-internal |
| Target Milestone: | --- |
| Target Release: | RHGS 3.1.2 |
| Keywords: | ZStream |
| Fixed In Version: | glusterfs-3.7.5-11 |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Clone Of: | |
| : | 1288474 (view as bug list) |
| Last Closed: | 2016-03-01 06:00:31 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| Category: | --- |
| oVirt Team: | --- |
| Cloudforms Team: | --- |
| Bug Blocks: | 1260783, 1288474, 1288484 |
Description (Anil Shah, 2015-12-03 09:29:13 UTC)
Version-Release number of selected component (if applicable):

```
[root@rhs001 ~]# rpm -qa | grep glusterfs
glusterfs-debuginfo-3.7.5-7.el7rhgs.x86_64
glusterfs-3.7.5-8.el7rhgs.x86_64
glusterfs-api-3.7.5-8.el7rhgs.x86_64
glusterfs-cli-3.7.5-8.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-8.el7rhgs.x86_64
glusterfs-libs-3.7.5-8.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-8.el7rhgs.x86_64
glusterfs-fuse-3.7.5-8.el7rhgs.x86_64
glusterfs-server-3.7.5-8.el7rhgs.x86_64
```

- upstream patch: http://review.gluster.org/#/c/12881/
- release-3.7 patch: http://review.gluster.org/#/c/12882/
- downstream patch: https://code.engineering.redhat.com/gerrit/63285
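Because the fix must be present on every peer, it can help to confirm the installed build across the pool before re-running the steps. A minimal sketch, assuming working ssh access and using the four peer addresses that appear in the brick lists below (the loop itself is not part of the original report):

```bash
# Confirm the glusterfs build on each peer (addresses taken from the
# brick lists in this report; adjust for your pool).
for host in 10.70.47.143 10.70.47.145 10.70.47.2 10.70.47.3; do
    echo "== $host =="
    ssh "$host" 'rpm -qa | grep glusterfs-server'
done
```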
```
[root@rhs001 lib]# gluster v info
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: b0a1562f-0d57-4d85-a481-0f3f3e4eefcd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.143:/rhs/brick2/b3
Brick4: 10.70.47.145:/rhs/brick2/b4
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@rhs001 lib]# gluster v quota vol0 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/ 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
[root@rhs001 lib]# gluster v attach-tier vol0 replica 2 10.70.47.143:/rhs/brick5/b01 10.70.47.145:/rhs/brick5/b02 10.70.47.2:/rhs/brick5/b03 10.70.47.3:/rhs/brick5/b04
volume attach-tier: success
Tiering Migration Functionality: vol0: success: Attach tier is successful on vol0. use tier status to check the status.
ID: de464b75-6887-4613-bc4f-a9dace84a89c
```
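The attach output above suggests checking tier status. The report does not run that command, but a sketch of the follow-up would look like this (command form assumed from the GlusterFS 3.7 CLI, not taken from this transcript):

```bash
# Inspect the tier daemon after attach (not part of the original transcript).
gluster volume tier vol0 status
```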
```
[root@rhs001 lib]# gluster v info vol0
Volume Name: vol0
Type: Tier
Volume ID: b0a1562f-0d57-4d85-a481-0f3f3e4eefcd
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick5/b04
Brick2: 10.70.47.2:/rhs/brick5/b03
Brick3: 10.70.47.145:/rhs/brick5/b02
Brick4: 10.70.47.143:/rhs/brick5/b01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick1/b1
Brick6: 10.70.47.145:/rhs/brick1/b2
Brick7: 10.70.47.143:/rhs/brick2/b3
Brick8: 10.70.47.145:/rhs/brick2/b4
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@rhs001 lib]# gluster v quota vol0 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/ 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
[root@rhs001 lib]# gluster v detach-tier vol0 start
volume detach-tier start: success
ID: 8107fe24-042a-423f-8343-aad5edd9e1eb
[root@rhs001 lib]# gluster v detach-tier vol0 status
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 0 0Bytes 0 0 0 completed 0.00
10.70.47.145 0 0Bytes 0 0 0 completed 0.00
10.70.47.2 0 0Bytes 0 0 0 completed 0.00
10.70.47.3 0 0Bytes 0 0 0 completed 0.00
[root@rhs001 lib]# gluster v detach-tier vol0 commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
```
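In a scripted run, detach-tier must report completed on every node between `start` and `commit`. A minimal polling sketch, assuming the status text matches the transcript above and using `--mode=script` to skip the interactive confirmation (the grep pattern is illustrative, not from the report):

```bash
# Drive the detach sequence non-interactively (sketch, not from the report).
gluster volume detach-tier vol0 start
# Wait until no node is still migrating; "in progress" is the assumed
# in-flight status string, adjust if your build prints something else.
while gluster volume detach-tier vol0 status | grep -q "in progress"; do
    sleep 5
done
gluster --mode=script volume detach-tier vol0 commit
```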
```
[root@rhs001 lib]# gluster v quota vol0 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/ 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
[root@rhs001 lib]# gluster v quota vol0 limit-usage /test/test1 10GB
volume quota : success
[root@rhs001 lib]# gluster v quota vol0 list
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/ 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
/test/test1 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
```
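Taken together, the verification steps condense into one script. A minimal sketch that mirrors the transcript above, using the same volume name, bricks, and limits (the completion wait is elided, as in the polling sketch):

```bash
#!/bin/bash
# Regression sketch: quota list must keep working across attach/detach tier.
set -e
VOL=vol0

gluster volume quota "$VOL" list          # baseline: limit on / is listed

gluster volume attach-tier "$VOL" replica 2 \
    10.70.47.143:/rhs/brick5/b01 10.70.47.145:/rhs/brick5/b02 \
    10.70.47.2:/rhs/brick5/b03 10.70.47.3:/rhs/brick5/b04
gluster volume quota "$VOL" list          # limits must still be listed

gluster volume detach-tier "$VOL" start
# ... wait for "completed" on all nodes (see the polling sketch above) ...
gluster --mode=script volume detach-tier "$VOL" commit
gluster volume quota "$VOL" list          # limits must still be listed

gluster volume quota "$VOL" limit-usage /test/test1 10GB
gluster volume quota "$VOL" list          # the new limit must appear
```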
Bug verified on build glusterfs-3.7.5-11.el7rhgs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html