Description of problem:
======================
On trying to remove bricks from a volume, the following warning is shown:

"Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated. Files that are not migrated can then be manually copied after the remove-brick commit operation."

By default, cluster.force-migration is disabled on the volume:

[root@rhs-arch-srv2 ec-11]# gluster v get disperse-vol all
cluster.force-migration                 off

Version-Release number of selected component (if applicable):
============================================================
3.5.0 (glusterfs-6.0-2)

How reproducible:
=================
2/2

Steps to Reproduce:
==================
1. On a brick-mux enabled setup, create a distributed-dispersed volume (a rough setup sketch is given after the Expected results section)
2. Try to remove a subvol

[root@dhcp43-44 test1]# gluster v remove-brick disperse-vol 10.70.42.80:/gluster/brick3/ec-13 10.70.43.211:/gluster/brick3/ec-14 10.70.43.116:/gluster/brick3/ec-15 10.70.43.102:/gluster/brick3/ec-16 10.70.35.15:/gluster/brick3/ec-17 10.70.43.44:/gluster/brick3/ec-18 start
Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated. Files that are not migrated can then be manually copied after the remove-brick commit operation. Do you want to continue with your current cluster.force-migration settings? (y/n)

3. Check the volume options

[root@dhcp43-44 test1]# gluster v get disperse-vol all | grep migr
cluster.lock-migration                  off
cluster.force-migration                 off
[root@dhcp43-44 test1]#

Actual results:
===============
cluster.force-migration is disabled by default, yet remove-brick shows a warning stating that it is enabled.

Expected results:
=================
As cluster.force-migration is disabled, the warning should not be shown.
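For reference, the setup assumed in step 1 of the Steps to Reproduce could look roughly like the sketch below. The hostnames, brick paths, and 3 x (4 + 2) layout are taken from the volume info later in this report; this is an illustration of one way to build such a setup, not the exact commands run on the system above.

# enable brick multiplexing cluster-wide (the "brick-mux enabled setup" in step 1)
gluster volume set all cluster.brick-multiplex on

# create a 3 x (4 + 2) distributed-dispersed volume across the six servers
gluster volume create disperse-vol disperse-data 4 redundancy 2 \
    10.70.42.80:/gluster/brick1/ec-1 10.70.43.211:/gluster/brick1/ec-2 \
    10.70.43.116:/gluster/brick1/ec-3 10.70.43.102:/gluster/brick1/ec-4 \
    10.70.35.15:/gluster/brick1/ec-5 10.70.43.44:/gluster/brick1/ec-6 \
    10.70.42.80:/gluster/brick2/ec-7 10.70.43.211:/gluster/brick2/ec-8 \
    10.70.43.116:/gluster/brick2/ec-9 10.70.43.102:/gluster/brick2/ec-10 \
    10.70.35.15:/gluster/brick2/ec-11 10.70.43.44:/gluster/brick2/ec-12 \
    10.70.42.80:/gluster/brick3/ec-13 10.70.43.211:/gluster/brick3/ec-14 \
    10.70.43.116:/gluster/brick3/ec-15 10.70.43.102:/gluster/brick3/ec-16 \
    10.70.35.15:/gluster/brick3/ec-17 10.70.43.44:/gluster/brick3/ec-18

gluster volume start disperse-vol

# confirm the default value of the option the warning refers to
gluster volume get disperse-vol cluster.force-migration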
Additional info:
=================
[root@dhcp43-44 test1]# gluster v status
Status of volume: disperse-vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.80:/gluster/brick1/ec-1      49152     0          Y       24646
Brick 10.70.43.211:/gluster/brick1/ec-2     49152     0          Y       1757
Brick 10.70.43.116:/gluster/brick1/ec-3     49152     0          Y       16568
Brick 10.70.43.102:/gluster/brick1/ec-4     49152     0          Y       4522
Brick 10.70.35.15:/gluster/brick1/ec-5      49152     0          Y       26261
Brick 10.70.43.44:/gluster/brick1/ec-6      49152     0          Y       14577
Brick 10.70.42.80:/gluster/brick2/ec-7      49152     0          Y       24646
Brick 10.70.43.211:/gluster/brick2/ec-8     49152     0          Y       1757
Brick 10.70.43.116:/gluster/brick2/ec-9     49152     0          Y       16568
Brick 10.70.43.102:/gluster/brick2/ec-10    49152     0          Y       4522
Brick 10.70.35.15:/gluster/brick2/ec-11     49152     0          Y       26261
Brick 10.70.43.44:/gluster/brick2/ec-12     49152     0          Y       14577
Brick 10.70.42.80:/gluster/brick3/ec-13     49152     0          Y       24646
Brick 10.70.43.211:/gluster/brick3/ec-14    49152     0          Y       1757
Brick 10.70.43.116:/gluster/brick3/ec-15    49152     0          Y       16568
Brick 10.70.43.102:/gluster/brick3/ec-16    49152     0          Y       4522
Brick 10.70.35.15:/gluster/brick3/ec-17     49152     0          Y       26261
Brick 10.70.43.44:/gluster/brick3/ec-18     49152     0          Y       14577
Self-heal Daemon on localhost               N/A       N/A        Y       22707
Self-heal Daemon on dhcp42-80.lab.eng.blr.redhat.com
                                            N/A       N/A        Y       32316
Self-heal Daemon on 10.70.43.116            N/A       N/A        Y       24324
Self-heal Daemon on 10.70.43.211            N/A       N/A        Y       9599
Self-heal Daemon on 10.70.43.102            N/A       N/A        Y       12352
Self-heal Daemon on 10.70.35.15             N/A       N/A        Y       1724

Volume Name: disperse-vol
Type: Distributed-Disperse
Volume ID: dd5779e2-9284-4543-8c2b-346291149c3c
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (4 + 2) = 18
Transport-type: tcp
Bricks:
Brick1: 10.70.42.80:/gluster/brick1/ec-1
Brick2: 10.70.43.211:/gluster/brick1/ec-2
Brick3: 10.70.43.116:/gluster/brick1/ec-3
Brick4: 10.70.43.102:/gluster/brick1/ec-4
Brick5: 10.70.35.15:/gluster/brick1/ec-5
Brick6: 10.70.43.44:/gluster/brick1/ec-6
Brick7: 10.70.42.80:/gluster/brick2/ec-7
Brick8: 10.70.43.211:/gluster/brick2/ec-8
Brick9: 10.70.43.116:/gluster/brick2/ec-9
Brick10: 10.70.43.102:/gluster/brick2/ec-10
Brick11: 10.70.35.15:/gluster/brick2/ec-11
Brick12: 10.70.43.44:/gluster/brick2/ec-12
Brick13: 10.70.42.80:/gluster/brick3/ec-13
Brick14: 10.70.43.211:/gluster/brick3/ec-14
Brick15: 10.70.43.116:/gluster/brick3/ec-15
Brick16: 10.70.43.102:/gluster/brick3/ec-16
Brick17: 10.70.35.15:/gluster/brick3/ec-17
Brick18: 10.70.43.44:/gluster/brick3/ec-18
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: enable
[root@dhcp43-211 ~]#
Adding back the keyword; it got removed by mistake.
As far as I remember, the message is not supposed to tell the user that the force-migration option is on and that it carries a risk. Fetching that setting from within the CLI is non-trivial, if I understand correctly. Hence, after consulting the doc team, we came up with text that is deliberately neutral: it warns the user that there is a risk if and only if the setting is turned on. If we drop that wording, we would need a way to look up the value of the setting and emit the warning only when it is on. Maybe rephrasing the text can be considered so that it conveys the right message.
I agree. Modifying the text might be the better option; fetching the option value and branching on it seems like overkill to me.
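To illustrate the alternative discussed above (look up the option value and warn only when it is actually on), a minimal sketch of such a pre-check from the shell is shown below. The volume name and option name come from this report; the variable names and message text are hypothetical, and this is not the actual CLI implementation, only an outline of the "fetch and decide" approach:

# hypothetical pre-check: warn only if cluster.force-migration is really on
VOL=disperse-vol
STATE=$(gluster volume get "$VOL" cluster.force-migration \
        | awk '/cluster.force-migration/ {print $2}')
if [ "$STATE" = "on" ]; then
    echo "Warning: cluster.force-migration is enabled on $VOL;" \
         "files that receive writes during migration may be corrupted."
fi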
REVIEW: https://review.gluster.org/22805
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249