When a tier is detached successfully, the watermark settings are still seen on the resulting regular volume, where they have no meaning. cluster.tier-mode should be removed as part of the detach-tier commit cleanup.

Seen on glusterfs-server-3.7.5-7.el7rhgs.x86_64:

[root@zod ~]# gluster v info fsync
Volume Name: fsync
Type: Dist-Rep
Volume ID: 862b28d6-329e-4ad4-8e32-0dd5e62a2670
Status: Started
Cold Tier Type : Distribute_replicate
Number of Bricks: 4
Brick5: zod:/rhs/brick1/fsync
Brick6: yarrow:/rhs/brick1/fsync
Brick7: zod:/rhs/brick2/fsync
Brick8: yarrow:/rhs/brick2/fsync
cluster.tier-mode: cache
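A minimal reproducer, assembled from the commands used later in this report (the volume name and brick paths here are illustrative):

# create a plain replica volume, attach a hot tier, then detach it again
gluster volume create repro replica 2 zod:/rhs/brick1/repro yarrow:/rhs/brick1/repro
gluster volume start repro
gluster volume attach-tier repro replica 2 zod:/rhs/brick2/repro_hot yarrow:/rhs/brick2/repro_hot
gluster volume detach-tier repro start
# wait until 'gluster volume detach-tier repro status' reports completed
gluster volume detach-tier repro commit
# bug: tier-only options such as cluster.tier-mode still show up here
gluster volume info repro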
Please update the fixed_in_version.
The watermark options are still set on the volume after detach-tier. This is on 3.7.5-9.

[root@rhs-client18 ~]# gluster --version
glusterfs 3.7.5 built on Dec  3 2015 13:01:46
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@rhs-client18 ~]# gluster v info disperse_vol1
Volume Name: disperse_vol1
Type: Distributed-Disperse
Volume ID: a6f27d4d-6838-47bf-ba2d-43fe9c980be3
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: transformers:/rhs/brick1/b1
Brick2: interstellar:/rhs/brick1/b2
Brick3: transformers:/rhs/brick2/b3
Brick4: interstellar:/rhs/brick2/b4
Brick5: transformers:/rhs/brick3/b5
Brick6: interstellar:/rhs/brick3/b6
Brick7: transformers:/rhs/brick4/b7
Brick8: interstellar:/rhs/brick4/b8
Brick9: transformers:/rhs/brick5/b9
Brick10: interstellar:/rhs/brick5/b10
Brick11: transformers:/rhs/brick6/b11
Brick12: interstellar:/rhs/brick6/b12
Options Reconfigured:
cluster.watermark-hi: 2        <<<<<<<<<<<<<<<<<<<<<<<
cluster.watermark-low: 1       <<<<<<<<<<<<<<<<<<<<<<<
server.event-threads: 2
client.event-threads: 2
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.uss: on
performance.readdir-ahead: on

[root@rhs-client18 ~]# gluster v set disperse_vol1 cluster.watermark-hi 0
volume set: failed: Volume disperse_vol1 is not a tier volume. Option cluster.watermark-hi is only valid for tier volume.
[root@rhs-client18 ~]# gluster v set disperse_vol1 cluster.watermark-low 0
volume set: failed: Volume disperse_vol1 is not a tier volume. Option cluster.watermark-low is only valid for tier volume.
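For anyone hitting this on an affected build, `gluster volume reset` is the generic way to clear a previously set option. It is not verified in this report whether reset is subject to the same tier-only validation that blocks `volume set` above, so treat this as a hedged attempt rather than a confirmed workaround:

# hedged workaround attempt (unverified here): try clearing the leftover options
gluster volume reset disperse_vol1 cluster.watermark-hi
gluster volume reset disperse_vol1 cluster.watermark-low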
The fix works on 3.7.5-9: the tier mode, which is set automatically on attach-tier, is removed on detach-tier commit.

[root@zod ~]# gluster v create holy rep 2 zod:/rhs/brick1/holy yarrow:/rhs/brick1/holy
volume create: holy: success: please start the volume to access data
[root@zod ~]# gluster v start holy
volume start: holy: success
[root@zod ~]# gluster v attach-tier holy rep 2 zod:/rhs/brick2/holy_hot yarrow:/rhs/brick2/holy_hot
volume attach-tier: success
Tiering Migration Functionality: holy: success: Attach tier is successful on holy. use tier status to check the status.
ID: 97d788ee-7e25-4e16-baec-18ee07235508
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy
Usage: volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Tier command failed
[root@zod ~]# gluster v detach-tier holy start
volume detach-tier start: success
ID: 7f4fc514-ae31-49fb-a684-3872194125a0
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy status
     Node   Rebalanced-files     size   scanned   failures   skipped      status   run time in secs
---------   ----------------   ------   -------   --------   -------   ---------   ----------------
localhost                  0   0Bytes         0          0         0   completed               0.00
   yarrow                  0   0Bytes         0          0         0   completed               0.00
[root@zod ~]# gluster v detach-tier holy commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@zod ~]# gluster v detach-tier holy status
volume tier detach status: failed: Volume holy is not a distribute volume or contains only 1 brick.
Not performing rebalance
Tier command failed
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Replicate
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: zod:/rhs/brick1/holy
Brick2: yarrow:/rhs/brick1/holy
Options Reconfigured:
performance.readdir-ahead: on
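To double-check the cleanup beyond `gluster v info`, one can also grep glusterd's on-disk volume info for tier-only keys. This is a sketch: /var/lib/glusterd is the default glusterd working directory (adjust if non-default), and the grep pattern simply covers the options shown above:

# run on any server node after detach-tier commit; expect no matches
grep -E 'tier-mode|watermark|ctr-enabled' /var/lib/glusterd/vols/holy/info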
The detach-tier stop path was also exercised (a tier had been attached to holy again for this run). Note that the tier options remaining after a stop is expected, since the detach was never committed:

[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy status
     Node   Rebalanced-files     size   scanned   failures   skipped      status   run time in secs
---------   ----------------   ------   -------   --------   -------   ---------   ----------------
localhost                  0   0Bytes         0          0         0   completed               0.00
   yarrow                  0   0Bytes         0          0         0   completed               0.00
[root@zod ~]# gluster v detach-tier holy stop
     Node   Rebalanced-files     size   scanned   failures   skipped      status   run time in secs
---------   ----------------   ------   -------   --------   -------   ---------   ----------------
localhost                  0   0Bytes         0          0         0   completed               0.00
   yarrow                  0   0Bytes         0          0         0   completed               0.00
'detach-tier' process may be in the middle of a file migration. The process will be fully stopped once the migration of the file is complete. Please check detach-tier process for completion before doing any further brick related tasks on the volume.
[root@zod ~]# gluster v detach-tier holy status
volume tier detach status: failed: Detach-tier not started
Tier command failed
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v stop holy
Stopping volume will make its data inaccessible. Do you want to continue?
(y/n) y
volume stop: holy: success
[root@zod ~]# gluster v start holy
volume start: holy: success
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Tier
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: yarrow:/rhs/brick2/holy_hot
Brick2: zod:/rhs/brick2/holy_hot
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: zod:/rhs/brick1/holy
Brick4: yarrow:/rhs/brick1/holy
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier holy force
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit force: success
[root@zod ~]# gluster v info holy
Volume Name: holy
Type: Replicate
Volume ID: 44644ecc-1ae7-4e64-91c6-93bc54cc3992
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: zod:/rhs/brick1/holy
Brick2: yarrow:/rhs/brick1/holy
Options Reconfigured:
performance.readdir-ahead: on
[root@zod ~]# gluster v status holy
Status of volume: holy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick zod:/rhs/brick1/holy                  49155     0          Y       23303
Brick yarrow:/rhs/brick1/holy               49155     0          Y       15804
NFS Server on localhost                     2049      0          Y       23412
Self-heal Daemon on localhost               N/A       N/A        Y       23421
NFS Server on yarrow                        2049      0          Y       15968
Self-heal Daemon on yarrow                  N/A       N/A        Y       15976

Task Status of Volume holy
------------------------------------------------------------------------------
There are no active volume tasks
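As a final sanity check after the force detach, grepping the volume info for tier-only options should come back empty (the grep pattern is illustrative, covering the options seen in this report):

gluster v info holy | grep -E 'tier-mode|watermark|ctr-enabled'
# expected to print nothing once the detach-tier cleanup works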
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html