Bug 1395626

Summary: Labelled geo-rep checkpoints hide geo-replication status
Product: [Community] GlusterFS
Reporter: Aravinda VK <avishwan>
Component: geo-replication
Assignee: Aravinda VK <avishwan>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.9
CC: bugs, cblum, csaba, rhinduja, rhs-bugs, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.9.1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1388401
Environment:
Last Closed: 2017-03-08 10:20:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1380122, 1388401
Bug Blocks:

Description Aravinda VK 2016-11-16 10:12:06 UTC
+++ This bug was initially created as a clone of Bug #1388401 +++

+++ This bug was initially created as a clone of Bug #1380122 +++
Description of problem:
When an arbitrary label is assigned to a checkpoint, the geo-replication status command reports:
No active geo-replication sessions between [masternode] and [geo-rep target]

Steps to Reproduce:
1. Set up a geo-replication session
2. Once the session has started, create a checkpoint with an arbitrary label, like this:
# gluster volume geo-replication rep01 RHGS3::slave config checkpoint chris
3. Run # gluster volume geo-replication rep01 RHGS3::slave status

Actual results:
No active geo-replication sessions between rep01 and RHGS3::slave

Expected results:
MASTER NODE    MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE           SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------
RHGS1          rep01         /rhs/brick1/rep01    root          RHGS3::slave    RHGS3         Active     Changelog Crawl    2016-09-28 13:13:45
RHGS2          rep01         /rhs/brick1/rep01    root          RHGS3::slave    RHGS4         Passive    N/A                N/A

Additional info:

It seems that geo-replication continues to run even though the status command reports that there is no active session :(

[root@RHGS1 rep01]# gluster volume geo-replication rep01 RHGS3::slave status

MASTER NODE    MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE           SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------
RHGS1          rep01         /rhs/brick1/rep01    root          RHGS3::slave    RHGS3         Active     Changelog Crawl    2016-09-28 13:13:45
RHGS2          rep01         /rhs/brick1/rep01    root          RHGS3::slave    RHGS4         Passive    N/A                N/A
[root@RHGS1 rep01]# gluster volume geo-replication rep01 RHGS3::slave config checkpoint chris
geo-replication config updated successfully
[root@RHGS1 rep01]# gluster volume geo-replication rep01 RHGS3::slave status
No active geo-replication sessions between rep01 and RHGS3::slave
[root@RHGS1 rep01]# gluster volume geo-replication rep01 RHGS3::slave config checkpoint now
geo-replication config updated successfully
[root@RHGS1 rep01]# gluster volume geo-replication rep01 RHGS3::slave status

MASTER NODE    MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE           SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------
RHGS1          rep01         /rhs/brick1/rep01    root          RHGS3::slave    RHGS3         Active     Changelog Crawl    2016-09-28 13:13:45
RHGS2          rep01         /rhs/brick1/rep01    root          RHGS3::slave    RHGS4         Passive    N/A                N/A


--- Additional comment from Aravinda VK on 2016-09-29 02:20:52 EDT ---

Set the checkpoint to the current time using:

gluster volume geo-replication rep01 RHGS3::slave config checkpoint now

As mentioned in the description, we also need to validate other inputs (labels other than "now").
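
For reference, the two label forms the checkpoint command is intended to accept (per the fix referenced in the comments below) are "now" and an explicit time in "%Y-%m-%d %H:%M:%S" format; the timestamp shown here is only an example value:

# gluster volume geo-replication rep01 RHGS3::slave config checkpoint now
# gluster volume geo-replication rep01 RHGS3::slave config checkpoint "2016-09-28 13:13:45"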

Comment 1 Worker Ant 2016-11-16 10:14:32 UTC
REVIEW: http://review.gluster.org/15854 (geo-rep/cli: Validate Checkpoint label) posted (#1) for review on release-3.9 by Aravinda VK (avishwan)

Comment 2 Worker Ant 2016-12-01 05:55:44 UTC
COMMIT: http://review.gluster.org/15854 committed in release-3.9 by Aravinda VK (avishwan) 
------
commit 589f8cb8eb7cab5c356decb1c923def2582faabc
Author: Aravinda VK <avishwan>
Date:   Tue Oct 25 12:56:05 2016 +0530

    geo-rep/cli: Validate Checkpoint label
    
    Checkpoint command accepts "now" or any other Time
    in "%Y-%m-%d %H:%M:%S" format as label.
    
    Validation added with this patch for the input label. Checkpoint set
    will fail for invalid label.
    
    > Reviewed-on: http://review.gluster.org/15721
    > Smoke: Gluster Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > Reviewed-by: Saravanakumar Arumugam <sarumuga>
    > Reviewed-by: Kotresh HR <khiremat>
    
    BUG: 1395626
    Change-Id: I23518c151ab4b294f64cae3b78baaacb3d8f7b82
    Signed-off-by: Aravinda VK <avishwan>
    (cherry picked from commit 8a1993b32f476765f9f5c9294e7c3f2ae75198a0)
    Reviewed-on: http://review.gluster.org/15854
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: Saravanakumar Arumugam <sarumuga>
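
With this validation in place, setting a checkpoint with an arbitrary label such as "chris" is expected to fail at config time rather than silently breaking the status output (the exact error message is not reproduced here), while "now" and a "%Y-%m-%d %H:%M:%S" timestamp remain valid, for example:

# gluster volume geo-replication rep01 RHGS3::slave config checkpoint chris          (expected to be rejected)
# gluster volume geo-replication rep01 RHGS3::slave config checkpoint "2016-09-28 13:13:45"   (accepted)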

Comment 3 Kaushal 2017-03-08 10:20:15 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.1, please open a new bug report.

glusterfs-3.9.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html
[2] https://www.gluster.org/pipermail/gluster-users/