Bug 1179638 - Dist-geo-rep : replace-brick/remove-brick won't work until the geo-rep session is deleted.
Summary: Dist-geo-rep : replace-brick/remove-brick won't work until the geo-rep session is deleted.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1002822 1170048 1176824 1186707
Blocks: 1049727
 
Reported: 2015-01-07 09:17 UTC by Kotresh HR
Modified: 2018-12-09 19:23 UTC (History)
18 users (show)

Fixed In Version: glusterfs-3.7.0beta1
Clone Of: 1176824
Environment:
Last Closed: 2015-05-14 17:26:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2015-01-07 09:17:45 UTC
+++ This bug was initially created as a clone of Bug #1176824 +++

This issue is the same as "Dist-geo-rep : volume won't be able to stop until the geo-rep session is deleted"; the only difference is that it now happens for replace-brick, and possibly for remove-brick.

Description of problem: Even if the geo-rep session is stopped, replace-brick is not allowed:
[root@fedora1 glusterfs]# gluster vol replace-brick master fedora1:/bricks/brick0/b0/ fedora1:/bricks/brick2/b2 commit force
volume replace-brick: failed: geo-replication sessions are active for the volume master.
Stop geo-replication sessions involved in this volume. Use 'volume geo-replication status' command for more info.

Even after geo-rep stop, the same error message is thrown.

How reproducible: Happens every time.

Steps to Reproduce:
1. Create and start the master and slave volumes.
2. Create and start a geo-rep session between the master and slave.
3. Stop the geo-rep session between the master and slave.
4. Try to replace one of the master volume's bricks.

Actual results: Replacing a brick in the volume is not allowed until the geo-rep session is deleted.


Expected results: Replacing a brick in the volume should be allowed once geo-rep is stopped; deleting the geo-rep session should not be required.
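Assuming a two-node setup (the hostnames, volume names, and brick paths below are illustrative, taken loosely from the error output above, not a verbatim reproduction), the failing sequence looks roughly like:

```shell
# Create and start master and slave volumes (names/paths are examples)
gluster volume create master fedora1:/bricks/brick0/b0
gluster volume start master

# Create and start a geo-rep session, then stop it
gluster volume geo-replication master fedora2::slavevol create push-pem
gluster volume geo-replication master fedora2::slavevol start
gluster volume geo-replication master fedora2::slavevol stop

# Replace-brick still fails, even though the session is stopped, not deleted
gluster volume replace-brick master fedora1:/bricks/brick0/b0 \
    fedora1:/bricks/brick2/b2 commit force
```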

Comment 1 Anand Avati 2015-01-07 09:22:19 UTC
REVIEW: http://review.gluster.org/9402 (glusterd/geo-rep: Allow replace/remove brick if geo-rep is stopped.) posted (#1) for review on master by Kotresh HR (khiremat)

Comment 2 Kotresh HR 2015-01-07 09:27:14 UTC
Documentation changes are expected during remove brick and replace brick if geo-rep is configured.

REMOVE BRICK:
1. Start remove brick
2. Ensure all data in the brick to be removed is synced to slave. Use geo-rep config checkpoint if necessary.
3. Ensure remove brick status and checkpoint status is completed.
4. Stop geo-replication session
5. commit remove brick.
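The remove-brick procedure above might look like the following on the CLI (a sketch only; volume, host, and brick names are illustrative):

```shell
# 1. Start remove-brick
gluster volume remove-brick master fedora1:/bricks/brick0/b0 start

# 2. Set a geo-rep checkpoint to verify data in the brick is synced to slave
gluster volume geo-replication master fedora2::slavevol config checkpoint now

# 3. Wait until remove-brick status and the checkpoint both report completed
gluster volume remove-brick master fedora1:/bricks/brick0/b0 status
gluster volume geo-replication master fedora2::slavevol status detail

# 4. Stop the geo-rep session
gluster volume geo-replication master fedora2::slavevol stop

# 5. Commit the remove-brick
gluster volume remove-brick master fedora1:/bricks/brick0/b0 commit
```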

REPLACE BRICK:
Geo-rep needs to be stopped before 'commit force'. The rest of the
steps remain the same. This is to make sure there are no
stale gsyncd processes for the old brick.
Start geo-rep once replace-brick is completed.
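The replace-brick flow above, sketched on the CLI (names and paths are illustrative, not from the report):

```shell
# Stop geo-rep before the replace-brick commit force
gluster volume geo-replication master fedora2::slavevol stop

gluster volume replace-brick master fedora1:/bricks/brick0/b0 \
    fedora1:/bricks/brick2/b2 commit force

# Restart geo-rep once replace-brick has completed
gluster volume geo-replication master fedora2::slavevol start
```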

Comment 3 Anand Avati 2015-01-22 20:08:37 UTC
REVIEW: http://review.gluster.org/9402 (glusterd/geo-rep: Allow replace/remove brick if geo-rep is stopped.) posted (#2) for review on master by Kotresh HR (khiremat)

Comment 4 Anand Avati 2015-02-03 13:10:33 UTC
REVIEW: http://review.gluster.org/9402 (glusterd/geo-rep: Allow replace/remove brick if geo-rep is stopped.) posted (#3) for review on master by Kotresh HR (khiremat)

Comment 5 Anand Avati 2015-02-16 12:22:21 UTC
COMMIT: http://review.gluster.org/9402 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit 8618abaaf07a96c0384db9bd1e7dbbe663f4f24c
Author: Kotresh HR <khiremat>
Date:   Tue Jan 6 20:26:39 2015 +0530

    glusterd/geo-rep: Allow replace/remove brick if geo-rep is stopped.
    
    Replace brick:
    If geo-replication was configured on a volume, replace brick
    used to fail. This patch allows replace brick to go through
    if all geo-rep sessions corresponding to that volume is stopped.
    
    Remove brick:
    There was no check for geo-replication for remove brick. Enforce
    'remove brick commit' to fail if geo-rep session corresponding
    to volume is running. Allow 'remove brick commit' only if all of
    the geo-rep sessions corresponding to that volume is stopped.
    
    Code is re-organized for better readability.
    
    Change-Id: I02282c2764d8b81e319489c977847e6e437511a4
    BUG: 1179638
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/9402
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: ajeet jha <ajha>
    Reviewed-by: Avra Sengupta <asengupt>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>

Comment 6 Niels de Vos 2015-05-14 17:26:21 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


