Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1181418

Summary: [SNAPSHOT]: Snapshot restore fails after adding a node to master with geo-replication involved
Product: [Community] GlusterFS
Reporter: Avra Sengupta <asengupt>
Component: snapshot
Assignee: Avra Sengupta <asengupt>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: pre-release
CC: asengupt, bugs, gluster-bugs, rcyriac, smanjara, spandit, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: snapshot
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1180560
Environment:
Last Closed: 2015-05-14 17:28:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1180560
Bug Blocks:

Comment 1 Anand Avati 2015-01-13 12:22:43 UTC
REVIEW: http://review.gluster.org/9441 (glusterd/snap: Fix restore cleanup) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Anand Avati 2015-01-22 08:33:28 UTC
REVIEW: http://review.gluster.org/9441 (glusterd/snap: Fix restore cleanup) posted (#2) for review on master by Avra Sengupta (asengupt)

Comment 3 Anand Avati 2015-01-22 08:58:19 UTC
REVIEW: http://review.gluster.org/9441 (glusterd/snap: Fix restore cleanup) posted (#3) for review on master by Avra Sengupta (asengupt)

Comment 4 Anand Avati 2015-01-27 07:26:06 UTC
COMMIT: http://review.gluster.org/9441 committed in master by Kaushal M (kaushal) 
------
commit bf227251eadcc35a102fc9db0c39e36b7336954d
Author: Avra Sengupta <asengupt>
Date:   Tue Jan 13 09:31:18 2015 +0000

    glusterd/snap: Fix restore cleanup
    
    If the restore commit is successful on the originator and
    a few nodes, but fails on some other node, restore cleanup
    should return the volume and the snapshot in question to
    the state they were in before the command was run.
    
    Change-Id: I7bb0becc7f052f55bc818018bc84770944e76c80
    BUG: 1181418
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/9441
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Rajesh Joseph <rjoseph>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Kaushal M <kaushal>

Comment 5 Anand Avati 2015-01-27 07:59:03 UTC
REVIEW: http://review.gluster.org/9489 (glusterd/snapshot: Ignore failure to copy geo-rep files.) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 6 Anand Avati 2015-01-27 11:22:56 UTC
COMMIT: http://review.gluster.org/9489 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit e39d80f9921c6fbfe084bdb66f95532794fc6aca
Author: Avra Sengupta <asengupt>
Date:   Tue Jan 27 07:57:27 2015 +0000

    glusterd/snapshot: Ignore failure to copy geo-rep files.
    
    If a new node is added to the cluster after a snapshot was
    taken, the geo-rep files are not synced to that node. This
    causes snapshot restore to fail. Hence, the missing geo-rep
    files on the new node are now ignored, and the snapshot
    restore proceeds. Once the restore is successful, the missing
    geo-rep files can be regenerated with "gluster volume geo-rep
    <master-vol> <slave-vol> create push-pem force"
    
    Change-Id: I1c364f8aefdd6c99b0b861b6d0cb33709ec39da2
    BUG: 1181418
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/9489
    Reviewed-by: Sachin Pandit <spandit>
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
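The recovery workflow described in the commit above can be sketched as a short CLI session. This is a sketch, not part of the patch: the snapshot name `snap1` is a placeholder, the geo-rep command is quoted from the commit message with its placeholders intact, and a working Gluster cluster with an existing geo-replication session is assumed:

```shell
# Restore the volume from the snapshot. With this fix, geo-rep files
# missing on a newly added node no longer cause the restore to fail.
gluster snapshot restore snap1

# Regenerate the geo-rep session files that are missing on the new
# node (command as given in the commit message).
gluster volume geo-rep <master-vol> <slave-vol> create push-pem force
```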

Comment 7 Niels de Vos 2015-05-14 17:28:59 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
