Bug 1287447
Summary: | remove watermark (i.e. cluster.tier-mode) from vol info after a detach-tier is completed successfully | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
Component: | tier | Assignee: | Mohammed Rafi KC <rkavunga> |
Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
Severity: | low | Docs Contact: | |
Priority: | high | ||
Version: | rhgs-3.1 | CC: | byarlaga, rhs-bugs, rkavunga, sankarshan, storage-qa-internal |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | RHGS 3.1.2 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-3.7.5-9 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-03-01 06:00:07 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1260783 |
Description
Nag Pavan Chilakam
2015-12-02 06:18:56 UTC
Please update the fixed_in_version. The watermark options are still set on the volume after detach-tier. This is on 3.7.5-9.

```
[root@rhs-client18 ~]# gluster --version
glusterfs 3.7.5 built on Dec 3 2015 13:01:46
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@rhs-client18 ~]# gluster v info disperse_vol1
Volume Name: disperse_vol1
Type: Distributed-Disperse
Volume ID: a6f27d4d-6838-47bf-ba2d-43fe9c980be3
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: transformers:/rhs/brick1/b1
Brick2: interstellar:/rhs/brick1/b2
Brick3: transformers:/rhs/brick2/b3
Brick4: interstellar:/rhs/brick2/b4
Brick5: transformers:/rhs/brick3/b5
Brick6: interstellar:/rhs/brick3/b6
Brick7: transformers:/rhs/brick4/b7
Brick8: interstellar:/rhs/brick4/b8
Brick9: transformers:/rhs/brick5/b9
Brick10: interstellar:/rhs/brick5/b10
Brick11: transformers:/rhs/brick6/b11
Brick12: interstellar:/rhs/brick6/b12
Options Reconfigured:
cluster.watermark-hi: 2      <<<<<<<<<<<<<<<<<<<<<<<
cluster.watermark-low: 1     <<<<<<<<<<<<<<<<<<<<<<<
server.event-threads: 2
client.event-threads: 2
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.uss: on
performance.readdir-ahead: on

[root@rhs-client18 ~]# gluster v set disperse_vol1 cluster.watermark-hi 0
volume set: failed: Volume disperse_vol1 is not a tier volume. Option cluster.watermark-hi is only valid for tier volume.
[root@rhs-client18 ~]# gluster v set disperse_vol1 cluster.watermark-low 0
volume set: failed: Volume disperse_vol1 is not a tier volume. Option cluster.watermark-low is only valid for tier volume.
```
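The leftover options shown above can be spotted mechanically. Below is a minimal sketch, not part of the gluster CLI: a hypothetical helper that greps captured `gluster v info` output for the tier-only option names that appear in this report and should not survive a detach-tier.

```shell
# Hypothetical helper, not a gluster command: list tier-only options still
# present in `gluster v info` output. The option names matched here are the
# ones shown in this report (watermark-hi/low, tier-mode, ctr-enabled); a
# real check might need a longer list.
check_tier_leftovers() {
    printf '%s\n' "$1" | grep -E '^(cluster\.(watermark-(hi|low)|tier-mode)|features\.ctr-enabled):'
}

# Sample taken from the disperse_vol1 output above:
info='Options Reconfigured:
cluster.watermark-hi: 2
cluster.watermark-low: 1
server.event-threads: 2
performance.readdir-ahead: on'

check_tier_leftovers "$info"
```

Run against the sample, this prints only the two leftover watermark lines, which is exactly the state the description complains about.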
Fix works on 3.7.5-9 (tier mode, which is set automatically on attach-tier, is removed on detach-tier commit):

```
[root@zod ~]# gluster v create holy rep 2 zod:/rhs/brick1/holy yarrow:/rhs/brick1/holy
volume create: holy: success: please start the volume to access data
[root@zod ~]# gluster v start holy
volume start: holy: success
[root@zod ~]# gluster v attach-tier holy rep 2 zod:/rhs/brick2/holy_hot yarrow:/rhs/brick2/holy_hot
volume attach-tier: success
Tiering Migration Functionality: holy: success: Attach tier is successful on holy. use tier status to check the status.
ID: 97d788ee-7e25-4e16-baec-18ee07235508
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy
Usage: volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Tier command failed
[root@zod ~]# gluster v detach-tier holy start
volume detach-tier start: success
ID: 7f4fc514-ae31-49fb-a684-3872194125a0
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
```
```
[root@zod ~]# gluster v detach-tier holy status
     Node  Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
---------  ----------------  ------  -------  --------  -------  ---------  ----------------
localhost                 0  0Bytes        0         0        0  completed              0.00
   yarrow                 0  0Bytes        0         0        0  completed              0.00
[root@zod ~]# gluster v detach-tier holy commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@zod ~]# gluster v detach-tier holy status
volume tier detach status: failed: Volume holy is not a distribute volume or contains only 1 brick. Not performing rebalance
Tier command failed
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Replicate
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: zod:/rhs/brick1/holy
Brick2: yarrow:/rhs/brick1/holy
Options Reconfigured:
performance.readdir-ahead: on

[root@zod ~]# gluster v create holy rep 2 zod:/rhs/brick1/holy yarrow:/rhs/brick1/holy
volume create: holy: success: please start the volume to access data
[root@zod ~]# gluster v start holy
volume start: holy: success
[root@zod ~]# gluster v attach-tier holy rep 2 zod:/rhs/brick2/holy_hot yarrow:/rhs/brick2/holy_hot
volume attach-tier: success
Tiering Migration Functionality: holy: success: Attach tier is successful on holy. use tier status to check the status.
```
```
ID: 97d788ee-7e25-4e16-baec-18ee07235508
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy
Usage: volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Tier command failed
[root@zod ~]# gluster v detach-tier holy start
volume detach-tier start: success
ID: 7f4fc514-ae31-49fb-a684-3872194125a0
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy status
     Node  Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
---------  ----------------  ------  -------  --------  -------  ---------  ----------------
localhost                 0  0Bytes        0         0        0  completed              0.00
   yarrow                 0  0Bytes        0         0        0  completed              0.00
[root@zod ~]# gluster v detach-tier holy commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
```
```
[root@zod ~]# gluster v detach-tier holy status
volume tier detach status: failed: Volume holy is not a distribute volume or contains only 1 brick. Not performing rebalance
Tier command failed
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Replicate
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: zod:/rhs/brick1/holy
Brick2: yarrow:/rhs/brick1/holy
Options Reconfigured:
performance.readdir-ahead: on

[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy status
     Node  Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
---------  ----------------  ------  -------  --------  -------  ---------  ----------------
localhost                 0  0Bytes        0         0        0  completed              0.00
   yarrow                 0  0Bytes        0         0        0  completed              0.00
[root@zod ~]# gluster v detach-tier holy stop
     Node  Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
---------  ----------------  ------  -------  --------  -------  ---------  ----------------
localhost                 0  0Bytes        0         0        0  completed              0.00
   yarrow                 0  0Bytes        0         0        0  completed              0.00
'detach-tier' process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check detach-tier process for completion before doing any further brick related tasks on the volume.
```
```
[root@zod ~]# gluster v detach-tier holy status
volume tier detach status: failed: Detach-tier not started
Tier command failed
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v stop holy
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: holy: success
[root@zod ~]# gluster v start holy
volume start: holy: success
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy force
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit force: success
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Replicate
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: zod:/rhs/brick1/holy
Brick2: yarrow:/rhs/brick1/holy
Options Reconfigured:
performance.readdir-ahead: on
[root@zod ~]# gluster v status holy
Status of volume: holy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick zod:/rhs/brick1/holy                  49155     0          Y       23303
Brick yarrow:/rhs/brick1/holy               49155     0          Y       15804
NFS Server on localhost                     2049      0          Y       23412
Self-heal Daemon on localhost               N/A       N/A        Y       23421
NFS Server on yarrow                        2049      0          Y       15968
Self-heal Daemon on yarrow                  N/A       N/A        Y       15976

Task Status of Volume holy
------------------------------------------------------------------------------
There are no active volume tasks
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
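The verification above boils down to one check: once `gluster v detach-tier <vol> commit` (or `force`) succeeds, `gluster v info <vol>` should report neither `Type: Tier` nor `cluster.tier-mode`. A minimal sketch of that check as a hypothetical shell helper (not a gluster command; it only parses captured `gluster v info` output):

```shell
# Hypothetical post-detach check: given captured `gluster v info <vol>`
# output, return 0 when the tier markers (Type: Tier, cluster.tier-mode)
# are gone, as they should be on the fixed build, and 1 otherwise.
detach_clean() {
    printf '%s\n' "$1" | grep -qE '^(Type: Tier|cluster\.tier-mode:)' && return 1
    return 0
}

# Pre-detach info from the transcript still carries both markers:
before='Type: Tier
cluster.tier-mode: cache'
detach_clean "$before" || echo 'tier markers still present'

# Post-commit info matches the fixed behaviour:
after='Type: Replicate
performance.readdir-ahead: on'
detach_clean "$after" && echo 'detach cleaned up'
```

Note that this only covers `cluster.tier-mode`; per the description, the watermark options were still left behind on this build, so a fuller check would match those option names too.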