Bug 1327165
| Summary: | snapshot-clone: clone volume doesn't start after node reboot | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Anil Shah <ashah> |
| Component: | snapshot | Assignee: | Avra Sengupta <asengupt> |
| Status: | CLOSED ERRATA | QA Contact: | Anil Shah <ashah> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | asengupt, asrivast, rhinduja, rhs-bugs, rjoseph, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-3 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1328010 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-23 05:17:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1311817, 1328010, 1329989 | | |
Description
Anil Shah 2016-04-14 11:33:25 UTC
Patch sent upstream:

Master: http://review.gluster.org/14021
Release-3.7: http://review.gluster.org/14059
Downstream patch: https://code.engineering.redhat.com/gerrit/73089

Verification transcript:

```
[root@dhcp46-4 ~]# gluster snapshot create snap1 vol no-timestamp
snapshot create: success: Snap snap1 created successfully

[root@dhcp46-4 ~]# gluster snapshot list
snap1

[root@dhcp46-4 ~]# gluster snapshot activate snap1
Snapshot activate: snap1: Snap activated successfully

[root@dhcp46-4 ~]# gluster snapshot clone clone1 snap1
snapshot clone: success: Clone clone1 created successfully

[root@dhcp46-4 ~]# gluster v info

Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0d86adee-2662-4223-9729-71f7dd3c004b
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
features.scrub: Active
features.bitrot: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

[root@dhcp46-4 ~]# gluster v start clone1
volume start: clone1: success

[root@dhcp46-4 ~]# init 6
Connection to 10.70.46.4 closed by remote host.
Connection to 10.70.46.4 closed.

[ashah@localhost ~]$ ssh root@10.70.46.4
root@10.70.46.4's password:
Last login: Tue May  3 20:37:06 2016

[root@dhcp46-4 ~]# gluster v info clone1

Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0d86adee-2662-4223-9729-71f7dd3c004b
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.bitrot: on
features.scrub: Active
```

Bug verified on build glusterfs-3.7.9-3.el7rhgs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
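For reference, a minimal shell sketch of the same verification flow, assuming the placeholder names from the transcript above (vol, snap1, clone1) and that it runs on one of the cluster nodes; it paraphrases the steps shown in the transcript and is not the exact QA procedure.

```bash
#!/bin/bash
# Sketch of the verification flow above: snapshot -> clone -> start, then
# confirm the cloned volume comes back Started after the node reboots.
# VOL, SNAP and CLONE are placeholders taken from the transcript.
set -e

VOL=vol
SNAP=snap1
CLONE=clone1

case "$1" in
setup)
    # Pre-reboot: snapshot the volume, activate it, clone it, start the clone.
    gluster snapshot create "$SNAP" "$VOL" no-timestamp
    gluster snapshot activate "$SNAP"
    gluster snapshot clone "$CLONE" "$SNAP"
    gluster volume start "$CLONE"
    echo "Now reboot the node (e.g. init 6) and run: $0 check"
    ;;
check)
    # Post-reboot: the fix is verified if the cloned volume is Started again.
    if gluster volume info "$CLONE" | grep -q '^Status: Started'; then
        echo "PASS: $CLONE is Started after reboot"
    else
        echo "FAIL: $CLONE did not start after reboot"
        exit 1
    fi
    ;;
*)
    echo "usage: $0 {setup|check}" >&2
    exit 2
    ;;
esac
```

Run with `setup` before the reboot and `check` after the node is back; on builds without the fix, the check step would report the clone as not Started.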