Bug 1287997 - tiering: quota list command is not working after attach or detach
Summary: tiering: quota list command is not working after attach or detach
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Vijaikumar Mallikarjuna
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1260783 1288474 1288484
 
Reported: 2015-12-03 09:29 UTC by Anil Shah
Modified: 2016-09-17 12:37 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.7.5-11
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1288474
Environment:
Last Closed: 2016-03-01 06:00:31 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:0193 (SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2, last updated 2016-03-01 10:20:36 UTC

Description Anil Shah 2015-12-03 09:29:13 UTC
Description of problem:

After a tier is attached to or detached from a volume, the
"gluster volume quota <volname> list" command no longer displays the
configured hard and soft limits.

Version-Release number of selected component (if applicable):

glusterfs-3.7.5-8.el7rhgs (full package list in comment 2)

How reproducible:

100%

Steps to Reproduce:

Scenario 1
===================================
1. Create a 2x2 distributed-replicate volume
2. Attach a 2x2 distributed-replicate tier to it
3. Enable quota and set a usage limit (limit-usage)
4. Run the quota list command
5. Detach the tier and commit the detach
6. Run the quota list command again (a shell sketch of these steps
   follows this list)
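
A minimal shell sketch of Scenario 1 (hostnames and brick paths are
placeholders; the command forms follow the CLI usage shown in comment 6):

# 1-2. Create and start a 2x2 distributed-replicate volume, then attach
#      a 2x2 distributed-replicate tier
gluster volume create vol0 replica 2 \
    host1:/rhs/brick1/b1 host2:/rhs/brick1/b2 \
    host1:/rhs/brick2/b3 host2:/rhs/brick2/b4
gluster volume start vol0
gluster volume attach-tier vol0 replica 2 \
    host1:/rhs/brick5/b01 host2:/rhs/brick5/b02 \
    host3:/rhs/brick5/b03 host4:/rhs/brick5/b04
# 3-4. Enable quota, set a usage limit on the volume root, and list it
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 10GB
gluster volume quota vol0 list
# 5-6. Detach the tier (wait for 'gluster volume detach-tier vol0 status'
#      to report completed before committing), then list again
gluster volume detach-tier vol0 start
gluster volume detach-tier vol0 commit
gluster volume quota vol0 list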

Scenario 2
=======================================

1. Create a 2x2 distributed-replicate volume
2. Enable quota and set a usage limit (limit-usage)
3. Attach a 2x2 distributed-replicate tier
4. Run the quota list command
5. Detach the tier and commit the detach
6. Run the quota list command again (sketch below)
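
Scenario 2 differs only in the ordering: quota is enabled and the limit
set before the tier is attached. A sketch with the same placeholder
hosts as above:

gluster volume create vol0 replica 2 \
    host1:/rhs/brick1/b1 host2:/rhs/brick1/b2 \
    host1:/rhs/brick2/b3 host2:/rhs/brick2/b4
gluster volume start vol0
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 10GB
gluster volume attach-tier vol0 replica 2 \
    host1:/rhs/brick5/b01 host2:/rhs/brick5/b02 \
    host3:/rhs/brick5/b03 host4:/rhs/brick5/b04
gluster volume quota vol0 list          # step 4
gluster volume detach-tier vol0 start
gluster volume detach-tier vol0 commit
gluster volume quota vol0 list          # step 6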

Actual results:

After the attach or detach, the quota list command does not display the
configured hard and soft limits.

Expected results:

The quota list command should keep working and display the configured
soft and hard limits after a tier is attached or detached.

Additional info:

Comment 2 Anil Shah 2015-12-03 09:31:57 UTC
Version-Release number of selected component (if applicable):

[root@rhs001 ~]# rpm -qa | grep glusterfs
glusterfs-debuginfo-3.7.5-7.el7rhgs.x86_64
glusterfs-3.7.5-8.el7rhgs.x86_64
glusterfs-api-3.7.5-8.el7rhgs.x86_64
glusterfs-cli-3.7.5-8.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-8.el7rhgs.x86_64
glusterfs-libs-3.7.5-8.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-8.el7rhgs.x86_64
glusterfs-fuse-3.7.5-8.el7rhgs.x86_64
glusterfs-server-3.7.5-8.el7rhgs.x86_64

Comment 4 Vijaikumar Mallikarjuna 2015-12-04 11:30:41 UTC
Patch submitted upstream: http://review.gluster.org/#/c/12881/

Comment 5 Vijaikumar Mallikarjuna 2015-12-08 12:50:29 UTC
upstream patch: http://review.gluster.org/#/c/12881/
release-3.7 patch: http://review.gluster.org/#/c/12882/
downstream patch: https://code.engineering.redhat.com/gerrit/63285
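
As background for triage (general GlusterFS behaviour, not stated in this
report): the configured limits are persisted per volume in glusterd's
working directory, so an empty CLI listing can be cross-checked against
the on-disk state. A hedged check, assuming the default working directory:

# quota.conf holds the limit records (binary gfid entries); if it is
# still non-empty while 'gluster volume quota vol0 list' prints nothing,
# the limits are intact on disk and the regression is in reporting.
ls -l /var/lib/glusterd/vols/vol0/quota.conf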

Comment 6 Anil Shah 2015-12-14 09:46:15 UTC
[root@rhs001 lib]# gluster v info
 
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: b0a1562f-0d57-4d85-a481-0f3f3e4eefcd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.143:/rhs/brick2/b3
Brick4: 10.70.47.145:/rhs/brick2/b4
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
 
[root@rhs001 lib]# gluster v quota vol0 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         10.0GB     80%(8.0GB)   0Bytes  10.0GB              No                   No
[root@rhs001 lib]# gluster v attach-tier vol0  replica 2 10.70.47.143:/rhs/brick5/b01 10.70.47.145:/rhs/brick5/b02 10.70.47.2:/rhs/brick5/b03 10.70.47.3:/rhs/brick5/b04
volume attach-tier: success
Tiering Migration Functionality: vol0: success: Attach tier is successful on vol0. use tier status to check the status.
ID: de464b75-6887-4613-bc4f-a9dace84a89c

[root@rhs001 lib]# gluster v info vol0
 
Volume Name: vol0
Type: Tier
Volume ID: b0a1562f-0d57-4d85-a481-0f3f3e4eefcd
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick5/b04
Brick2: 10.70.47.2:/rhs/brick5/b03
Brick3: 10.70.47.145:/rhs/brick5/b02
Brick4: 10.70.47.143:/rhs/brick5/b01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick1/b1
Brick6: 10.70.47.145:/rhs/brick1/b2
Brick7: 10.70.47.143:/rhs/brick2/b3
Brick8: 10.70.47.145:/rhs/brick2/b4
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@rhs001 lib]# gluster v quota vol0 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         10.0GB     80%(8.0GB)   0Bytes  10.0GB              No                   No


[root@rhs001 lib]# gluster v detach-tier vol0 start
volume detach-tier start: success
ID: 8107fe24-042a-423f-8343-aad5edd9e1eb
[root@rhs001 lib]# gluster v detach-tier vol0 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0            completed               0.00
                            10.70.47.145                0        0Bytes             0             0             0            completed               0.00
                              10.70.47.2                0        0Bytes             0             0             0            completed               0.00
                              10.70.47.3                0        0Bytes             0             0             0            completed               0.00
[root@rhs001 lib]# gluster v detach-tier vol0 commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@rhs001 lib]# gluster v quota vol0 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         10.0GB     80%(8.0GB)   0Bytes  10.0GB              No                   No

[root@rhs001 lib]# gluster v quota vol0 limit-usage /test/test1 10GB
volume quota : success
[root@rhs001 lib]# gluster v quota vol0 list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         10.0GB     80%(8.0GB)   0Bytes  10.0GB              No                   No
/test/test1                               10.0GB     80%(8.0GB)   0Bytes  10.0GB              No                   No


Quota limits are listed correctly after both attach-tier and detach-tier,
as shown above. Bug verified on build glusterfs-3.7.5-11.el7rhgs.x86_64.

Comment 8 errata-xmlrpc 2016-03-01 06:00:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

