Bug 1157974 - Warning message to restore data from removed bricks should not be thrown when 'remove-brick force' was used
Summary: Warning message to restore data from removed bricks should not be thrown when 'remove-brick force' was used
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: pre-release
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On: 1142087
Blocks: 1087818
 
Reported: 2014-10-28 07:03 UTC by Susant Kumar Palai
Modified: 2015-05-14 17:44 UTC
CC: 10 users

Fixed In Version: glusterfs-3.7.0
Clone Of: 1142087
Environment:
Last Closed: 2015-05-14 17:28:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Susant Kumar Palai 2014-10-28 07:03:27 UTC
+++ This bug was initially created as a clone of Bug #1142087 +++

Description of problem:
-----------------------
'remove-brick force' was used to forcefully remove the brick from the volume, knowing that data on it would be lost. There is no data migration involved with 'remove-brick force'.

However, with the latest build, a warning is thrown suggesting that the user copy data from the removed brick to the gluster mount if the files were not migrated.

This is contradictory.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
glusterfs-3.6.0.28-1.el6rhs

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Create a distributed volume
2. Try removing a brick forcefully
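
For reference, a minimal reproduction might look like the following (volume name, hostnames, and brick paths are placeholders, not taken from the original report):

[root@rhss5 ~]# gluster volume create distvol NODE1:/rhs/brick1/b1 NODE2:/rhs/brick1/b2
[root@rhss5 ~]# gluster volume start distvol
[root@rhss5 ~]# gluster volume remove-brick distvol NODE2:/rhs/brick1/b2 force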

Actual results:
---------------
A warning message is thrown asking the user to copy data back from the removed brick to the gluster mount (if any data is present on the removed brick):

[root@rhss5 ~]# gluster volume remove-brick repvol NODE1:/rhs/brick1/b1 NODE2:/rhs/brick1/b1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 

Expected results:
-----------------
The warning message should not be thrown, as no data migration is involved with 'remove-brick force'.

Additional Info:
----------------
The intent of 'remove-brick force' is to remove the brick forcefully, ignoring the data on it. The CLI confirms this before executing 'remove-brick force' with: "Removing brick(s) can result in data loss. Do you want to Continue? (y/n)"

Then, after 'remove-brick force' completes, the following warning is thrown:
"Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick."
This contradicts the earlier confirmation.
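
For contrast, the decommissioning flow is where the warning belongs, because data is actually migrated off the brick before it is removed. An illustrative sequence (volume and brick names are placeholders):

[root@rhss5 ~]# gluster volume remove-brick distvol NODE2:/rhs/brick1/b2 start
[root@rhss5 ~]# gluster volume remove-brick distvol NODE2:/rhs/brick1/b2 status
[root@rhss5 ~]# gluster volume remove-brick distvol NODE2:/rhs/brick1/b2 commit

Only after the 'commit' step does it make sense to ask the user to verify that all files were migrated off the removed brick.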

--- Additional comment from SATHEESARAN on 2014-09-16 12:15:19 MVT ---

Example :

[root@rhss5 ~]# gluster volume remove-brick repvol NODE1:/rhs/brick1/b1 NODE2:/rhs/brick1/b1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.

--- Additional comment from Nagaprasad Sathyanarayana on 2014-09-17 10:23:32 MVT ---

As discussed and agreed by engineering leads, this is to be targeted for the 3.0.2 release.

--- Additional comment from Shalaka on 2014-09-18 10:52:37 MVT ---

Please add doc text for this known issue.

--- Additional comment from Nithya Balachandran on 2014-09-22 10:22:34 MVT ---

I would say we can probably skip documenting this as a known issue, as the only thing we can say is that the user can ignore the message in this scenario.

--- Additional comment from Shalaka on 2014-09-26 10:54:14 MVT ---

Canceling need_info, as Nithya reviewed and signed off on the doc text.

Comment 1 Anand Avati 2014-10-28 07:06:56 UTC
REVIEW: http://review.gluster.org/8983 (CLI: Show warning message only for remove-brick commit) posted (#1) for review on master by susant palai (spalai)

Comment 2 Anand Avati 2014-10-29 12:46:15 UTC
COMMIT: http://review.gluster.org/8983 committed in master by Kaushal M (kaushal) 
------
commit c6e6b43b169b8452ee26121ce1ad0b0f07b512cf
Author: Susant Palai <spalai>
Date:   Tue Oct 28 02:53:39 2014 -0400

    CLI: Show warning message only for remove-brick commit
    
    Earlier warning message for checking the removed-bricks for
    any unmigrated files was thrown both for remove-brick commit
    and force.
    
    With this change the warning message will be shown only for
    remove-brick commit.
    
    Change-Id: Ib1fa47d831d5189088c77c5cf709a25ff36d7379
    BUG: 1157974
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/8983
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Kaushal M <kaushal>
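
With this change, an illustrative 'force' run keeps the data-loss confirmation but no longer prints the migration warning (hypothetical output based on the change description above, not a captured run):

[root@rhss5 ~]# gluster volume remove-brick repvol NODE1:/rhs/brick1/b1 NODE2:/rhs/brick1/b1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success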

Comment 3 Niels de Vos 2015-05-14 17:28:07 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
