Bug 1550936 - Pause/Resume of geo-replication with wrong user specified returns success
Summary: Pause/Resume of geo-replication with wrong user specified returns success
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Sunny Kumar
QA Contact:
URL:
Whiteboard:
Depends On: 1547446 1557297
Blocks:
 
Reported: 2018-03-02 10:18 UTC by Sunny Kumar
Modified: 2018-06-20 18:01 UTC
CC: 3 users

Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1547446
Environment:
Last Closed: 2018-06-20 18:01:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sunny Kumar 2018-03-02 10:18:40 UTC
+++ This bug was initially created as a clone of Bug #1547446 +++

Description of problem:
Geo-replication is set up with a non-root user on the slave node,
as described in the documentation:
http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
If you run pause or resume on the geo-replication session with the wrong user
(a user other than the one the session was set up with), the command reports
success even though the pause/resume does not take effect.
To get the session back into a working state you need to restart glusterd on all nodes.
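
A minimal sketch of that workaround, assuming systemd-managed glusterd and example hostnames (node1, node2, node3 are placeholders, not from this report):

# Restart glusterd on every node in the trusted storage pool:
for node in node1 node2 node3; do
    ssh root@${node} "systemctl restart glusterd"
done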

Version-Release number of selected component (if applicable):
glusterfs 3.12.5 (all packages)

How reproducible:
If geo-replication is set up with user geoaccount, run:
gluster volume geo-replication [volume_name] anotheruser@[main_slave_node]::[volume_name] pause
Returns:
Pausing geo-replication session between [volume_name] & anotheruser@[main_slave_node]::[volume_name] has been successful
If you then create a snapshot, you get an error saying that you need to stop geo-replication first.
So geo-replication does not actually pause.

Steps to Reproduce:
1. gluster volume geo-replication [volume_name] anotheruser@[main_slave_node]::[volume_name] pause
2. gluster snapshot create [snap_name] [volume_name]
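
A concrete reproduction sketch (volume, slave host, and user names below are illustrative placeholders): the session was created for user geoaccount, but pause is run with a different user.

# Step 1: pause with a user the session was not set up with
gluster volume geo-replication myvol wronguser@slave1::myvol pause
echo $?    # prints 0 and the command reports success, although nothing is paused

# Step 2: snapshot creation then fails because an active geo-replication
# session is still detected (exact error wording may vary)
gluster snapshot create snap1 myvol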

Actual results:
Geo-replication pause/resume returns success with exit code 0 even when a wrong user, or no user (i.e. root), is specified, although the session was set up with a dedicated geo-replication user.


Expected results:
Return code != 0 and an error message saying that a wrong user was specified.

Additional info:
The case where geo-replication is set up as root and another user is specified at pause/resume has not been tested.

--- Additional comment from Worker Ant on 2018-03-02 04:29:25 EST ---

REVIEW: https://review.gluster.org/19658 (georep : Pause/Resume of geo-replication with wrong user) posted (#1) for review on master by Sunny Kumar

Comment 1 Worker Ant 2018-03-02 10:21:59 UTC
REVIEW: https://review.gluster.org/19658 (georep : Pause/Resume of geo-replication with wrong user) posted (#2) for review on master by Sunny Kumar

Comment 2 Worker Ant 2018-03-20 07:27:43 UTC
COMMIT: https://review.gluster.org/19658 committed in master by "Atin Mukherjee" <amukherj> with a commit message- georep : Pause/Resume of geo-replication with wrong user

While performing pause/resume on geo-replication with a wrong user
(a user other than the one the session was set up with), the command
always returns success. This further leads to snapshot creation failure,
because an active geo-replication session is still detected.

Change-Id: I6e96e8dd3e861348b057475387f0093cb903ae88
BUG: 1550936
Signed-off-by: Sunny Kumar <sunkumar>
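
A minimal verification sketch for the fix (volume, host, and user names are placeholders): after this change, pause with a non-matching user should fail with a non-zero exit status, while the configured session user keeps working.

# Expected to fail after the fix: user does not match the session user
gluster volume geo-replication myvol wronguser@slave1::myvol pause
echo $?    # expected to be non-zero, with an error about the wrong user

# Expected to keep working: the user the session was created with
gluster volume geo-replication myvol geoaccount@slave1::myvol pause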

Comment 3 Shyamsundar 2018-06-20 18:01:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

