Bug 1328010
| Field | Value |
|---|---|
| Summary | snapshot-clone: clone volume doesn't start after node reboot |
| Product | [Community] GlusterFS |
| Component | snapshot |
| Reporter | Avra Sengupta <asengupt> |
| Assignee | Avra Sengupta <asengupt> |
| Status | CLOSED CURRENTRELEASE |
| Severity | urgent |
| Priority | unspecified |
| Version | mainline |
| CC | asengupt, ashah, bugs, rjoseph, storage-qa-internal |
| Keywords | ZStream |
| Hardware | x86_64 |
| OS | Linux |
| Fixed In Version | glusterfs-3.8rc2 |
| Doc Type | Bug Fix |
| Clone Of | 1327165 |
| Last Closed | 2016-06-16 14:03:35 UTC |
| Type | Bug |
| Bug Depends On | 1327165 |
| Bug Blocks | 1329989 (view as bug list) |
Comment 1 by Vijay Bellur, 2016-04-18 09:18:33 UTC
COMMIT: http://review.gluster.org/14021 committed in master by Rajesh Joseph (rjoseph)

    commit fe6c4efcc66bca84aaceb352de38f0b58b70b780
    Author: Avra Sengupta <asengupt>
    Date:   Mon Apr 18 14:44:18 2016 +0530

        clone/snapshot: Save restored_from_snap for clones

        Bricks of cloned volumes are lvm bricks mounted in /run/gluster,
        which on reboot of the node gets cleared. Hence, these brick paths
        need to be recreated on glusterd restart and the appropriate lvms
        are mounted.

        Change-Id: I6da086288c0dbdcedf3a20fd53f25e3728bea473
        BUG: 1328010
        Signed-off-by: Avra Sengupta <asengupt>
        Reviewed-on: http://review.gluster.org/14021
        Smoke: Gluster Build System <jenkins.com>
        CentOS-regression: Gluster Build System <jenkins.com>
        NetBSD-regression: NetBSD Build System <jenkins.org>
        Reviewed-by: Rajesh Joseph <rjoseph>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
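The commit message above describes the recovery glusterd must perform at restart: /run/gluster is backed by tmpfs, so the clone's brick directories disappear on every reboot and have to be recreated before the LVM brick devices can be remounted. A minimal shell sketch of that idea follows; the volume name, volume group, and LV device used here are hypothetical examples, not paths taken from the actual patch.

```shell
# Sketch of the per-brick recovery step (hypothetical names throughout).
BRICK_MOUNT=/run/gluster/snaps/clone_vol/brick1   # tmpfs path, emptied on reboot
LV_DEV=/dev/mapper/vg0-clone_lv                   # the clone's LVM brick device (example)

# Recreate the brick directory. Fall back to a temp dir when /run/gluster
# is not writable (e.g. when trying this sketch off a gluster node).
mkdir -p "$BRICK_MOUNT" 2>/dev/null || BRICK_MOUNT="$(mktemp -d)/brick1"
mkdir -p "$BRICK_MOUNT"

# Remount the LVM brick only if the device actually exists (requires root).
if [ -b "$LV_DEV" ]; then
    mount "$LV_DEV" "$BRICK_MOUNT"
fi
```

In the real fix this bookkeeping is possible because the clone records which snapshot it was restored from (the `restored_from_snap` field named in the commit subject), letting glusterd locate the right LVs on restart.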