Bug 1328010 - snapshot-clone: clone volume doesn't start after node reboot
Summary: snapshot-clone: clone volume doesn't start after node reboot
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard:
Depends On: 1327165
Blocks: 1329989
Reported: 2016-04-18 08:47 UTC by Avra Sengupta
Modified: 2016-06-16 14:03 UTC (History)
CC List: 5 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1327165
: 1329989 (view as bug list)
Environment:
Last Closed: 2016-06-16 14:03:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2016-04-18 09:18:33 UTC
REVIEW: http://review.gluster.org/14021 (clone/snapshot: Save restored_from_snap for clones) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Vijay Bellur 2016-04-25 08:42:49 UTC
COMMIT: http://review.gluster.org/14021 committed in master by Rajesh Joseph (rjoseph) 
------
commit fe6c4efcc66bca84aaceb352de38f0b58b70b780
Author: Avra Sengupta <asengupt>
Date:   Mon Apr 18 14:44:18 2016 +0530

    clone/snapshot: Save restored_from_snap for clones
    
    Bricks of cloned volumes are LVM bricks mounted under
    /run/gluster, which is cleared when the node reboots.
    Hence, these brick paths need to be recreated on
    glusterd restart, and the appropriate LVs remounted.
    
    Change-Id: I6da086288c0dbdcedf3a20fd53f25e3728bea473
    BUG: 1328010
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/14021
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Rajesh Joseph <rjoseph>
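
The failure mode the commit describes comes from /run being a tmpfs: any brick mount points created under /run/gluster vanish on reboot, so glusterd must recreate them and remount the brick LVs before the clone volume can start. A minimal, unprivileged sketch of that recreate-on-restart check follows; all paths and names here are hypothetical stand-ins (a temp directory substitutes for /run/gluster, and the actual mount is shown only as a comment), not the real glusterd code:

```shell
#!/bin/sh
# Stand-in for /run/gluster so this runs without root; on a real node
# this directory lives on tmpfs and is empty after every reboot.
RUN_GLUSTER=$(mktemp -d)
BRICK_PATH="$RUN_GLUSTER/snaps/clone_vol/brick1"   # hypothetical brick path

# After a reboot the brick path is missing, so recreate it...
if [ ! -d "$BRICK_PATH" ]; then
    mkdir -p "$BRICK_PATH"
    # ...then remount the clone's LV on it (hypothetical VG/LV names):
    # mount /dev/vg_gluster/clone_vol_lv "$BRICK_PATH"
fi

echo "brick path present: $(test -d "$BRICK_PATH" && echo yes)"
```

Without the `restored_from_snap` information saved for clones, glusterd had no record of which LV backed the clone's bricks, so this recreate-and-remount step could not run for cloned volumes.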

Comment 3 Niels de Vos 2016-06-16 14:03:35 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

