Bug 1327165 - snapshot-clone: clone volume doesn't start after node reboot
Summary: snapshot-clone: clone volume doesn't start after node reboot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Avra Sengupta
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1311817 1328010 1329989
 
Reported: 2016-04-14 11:33 UTC by Anil Shah
Modified: 2016-09-17 13:05 UTC (History)
CC List: 6 users

Fixed In Version: glusterfs-3.7.9-3
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1328010 (view as bug list)
Environment:
Last Closed: 2016-06-23 05:17:43 UTC
Embargoed:




Links
  System:       Red Hat Product Errata
  ID:           RHBA-2016:1240
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      Red Hat Gluster Storage 3.1 Update 3
  Last Updated: 2016-06-23 08:51:28 UTC

Description Anil Shah 2016-04-14 11:33:25 UTC
Description of problem:

After creating a clone from a snapshot and then restarting one of the storage nodes, the clone volume doesn't come up. 'gluster volume info' shows its status as Created.


Version-Release number of selected component (if applicable):

glusterfs-3.7.9-1.el7rhgs.x86_64


How reproducible:

100%

Steps to Reproduce:
1. Create a 2 x 2 distributed-replicate volume
2. Create a snapshot of the volume and activate it
3. Create a clone of the snapshot and start it
4. Reboot one of the storage nodes (a command sketch of these steps follows below)
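
A minimal shell sketch of the steps above, assuming a four-node trusted storage pool with thin-LV backed bricks; the host names and brick paths are placeholders, while the volume, snapshot, and clone names match the transcripts later in this report:

# Create and start a 2 x 2 distributed-replicate volume (brick paths are illustrative).
gluster volume create vol replica 2 \
    server1:/bricks/b1 server2:/bricks/b2 \
    server3:/bricks/b3 server4:/bricks/b4
gluster volume start vol

# Snapshot the volume, activate the snapshot, clone it, and start the clone.
gluster snapshot create snap1 vol no-timestamp
gluster snapshot activate snap1
gluster snapshot clone clone1 snap1
gluster volume start clone1

# Reboot any one storage node, then check the clone once glusterd is back up.
reboot
gluster volume info clone1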

Actual results:

After the node reboot, the clone volume doesn't come up; 'gluster volume info' still shows its status as Created.

Expected results:

The clone volume should come back up (Status: Started) after the node restarts; a quick check is sketched below.
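
A quick post-reboot check (a sketch; assumes the clone is named clone1 as in the output below):

# The clone should report "Status: Started" and its brick processes should be online.
gluster volume info clone1 | grep '^Status'
gluster volume status clone1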

Additional info:

After node reboot:
=====================================
[root@dhcp46-4 ~]# gluster v info
 
Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 1a859406-79aa-472a-bd28-71ea5091532a
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
cluster.entry-change-log: enable
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on

Before node reboot:
==================================
[root@dhcp46-4 ~]# gluster v info
 
Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 1a859406-79aa-472a-bd28-71ea5091532a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
cluster.entry-change-log: enable
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on

Comment 4 rjoseph 2016-04-25 10:08:21 UTC
Patch sent upstream

Master: http://review.gluster.org/14021
Release-3.7: http://review.gluster.org/14059

Comment 5 rjoseph 2016-04-27 06:06:00 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/73089

Comment 7 Anil Shah 2016-05-03 10:18:12 UTC
[root@dhcp46-4 ~]# gluster snapshot create snap1 vol no-timestamp
snapshot create: success: Snap snap1 created successfully
[root@dhcp46-4 ~]# gluster snapshot list
snap1
[root@dhcp46-4 ~]# gluster snapshot activate snap1
Snapshot activate: snap1: Snap activated successfully
[root@dhcp46-4 ~]# gluster snapshot clone clone1 snap1
snapshot clone: success: Clone clone1 created successfully


[root@dhcp46-4 ~]# gluster  v info
 
Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0d86adee-2662-4223-9729-71f7dd3c004b
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
features.scrub: Active
features.bitrot: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
 
[root@dhcp46-4 ~]# gluster v start clone1
volume start: clone1: success

[root@dhcp46-4 ~]# init 6
Connection to 10.70.46.4 closed by remote host.
Connection to 10.70.46.4 closed.

[ashah@localhost ~]$ ssh root@10.70.46.4
root@10.70.46.4's password: 
Last login: Tue May  3 20:37:06 2016

[root@dhcp46-4 ~]# gluster v info clone1
 
Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0d86adee-2662-4223-9729-71f7dd3c004b
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.bitrot: on
features.scrub: Active


Bug verified on build glusterfs-3.7.9-3.el7rhgs.x86_64

Comment 9 errata-xmlrpc 2016-06-23 05:17:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

