Description of problem: More details can be found at https://github.com/gluster/glusterfs/issues/308. This bug is created to provide a workaround until the feature (mentioned in github#308) is complete. We already have a couple of patches merged in this regard:

https://review.gluster.org/#/c/19207/
https://review.gluster.org/#/c/19202/

The above patches restrict migration of a file if there is any open fd on it. We also introduced a new option, cluster.force-migration, which can override these restrictions and migrate a file even if it is open. We need another patch where the user gives a confirmation before carrying on with the remove-brick process under the current setting of the force-migration option.
REVIEW: https://review.gluster.org/19625 (cli/glusterd: Add warning message in cli for user to check) posted (#9) for review on master by Susant Palai
COMMIT: https://review.gluster.org/19625 committed in master by "Atin Mukherjee" <amukherj> with a commit message-

cli/glusterd: Add warning message in cli for user to check force-migration config for remove-brick operation.

The cli will take input from the user before starting the "remove-brick" start operation. The message/confirmation looks like the following:

<Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated. Files that are not migrated can then be manually copied after the remove-brick commit operation. Do you want to continue with your current cluster.force-migration settings? (y/n)>

The question for COMMIT_FORCE is also changed.

Fixes: bz#1572586
Change-Id: Ifdb6b108a646f50339dd196d6e65962864635139
Signed-off-by: Susant Palai <spalai>
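For reference, the safer workflow described in the commit message maps to a CLI sequence along these lines. This is a sketch only: the volume name "testvol" and the brick path are placeholders, and the confirmation prompt appears only on builds that include the patch above.

```shell
# Disable forced migration so that files with open fds are skipped during
# migration rather than migrated (skipped files can be copied manually
# after the remove-brick commit). "testvol" and the brick path are
# illustrative placeholders.
gluster volume set testvol cluster.force-migration off

# Start the remove-brick operation; with the patch above, the CLI asks for
# confirmation about the current cluster.force-migration setting first.
gluster volume remove-brick testvol server1:/bricks/brick1 start

# Monitor migration progress until it completes.
gluster volume remove-brick testvol server1:/bricks/brick1 status

# Commit the removal once migration is done, then manually copy any files
# that were skipped because they had open fds.
gluster volume remove-brick testvol server1:/bricks/brick1 commit
```

The point of disabling cluster.force-migration is to trade completeness for safety: files receiving writes stay on the old brick instead of risking corruption mid-migration.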
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/