Description of problem:
After creating a clone from a snapshot and rebooting one of the storage nodes, the clone volume does not come up. 'gluster volume info' shows the clone's status as "Created".

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-1.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Create a snapshot and activate it
3. Create a clone of the snapshot
4. Reboot one of the storage nodes
(A command-level sketch of these steps follows the volume info output below.)

Actual results:
After the node reboot, the clone volume does not come up; its status reverts to "Created".

Expected results:
The clone volume should come up (remain "Started") after the node reboot.

Additional info:

After node reboot:
=====================================
[root@dhcp46-4 ~]# gluster v info

Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 1a859406-79aa-472a-bd28-71ea5091532a
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
cluster.entry-change-log: enable
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on

Before node reboot:
==================================
[root@dhcp46-4 ~]# gluster v info

Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 1a859406-79aa-472a-bd28-71ea5091532a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
cluster.entry-change-log: enable
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
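A command-level sketch of the reproduction steps (assumptions: the four nodes are already in the trusted storage pool, bricks sit on thinly-provisioned LVM as gluster snapshots require, and the hostnames/brick paths are placeholders, not the ones from this setup):

# gluster volume create vol replica 2 node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/b3 node4:/bricks/b4
# gluster volume start vol
# gluster snapshot create snap1 vol no-timestamp
# gluster snapshot activate snap1
# gluster snapshot clone clone1 snap1
# gluster volume start clone1
# gluster volume info clone1      <-- Status: Started
(reboot one of the storage nodes and wait for it to come back)
# gluster volume info clone1      <-- Status: Created (the bug)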
Patch sent upstream:
Master: http://review.gluster.org/14021
Release-3.7: http://review.gluster.org/14059
Downstream patch: https://code.engineering.redhat.com/gerrit/73089
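A hedged triage check on an affected node: glusterd rebuilds volume state from its on-disk store under /var/lib/glusterd at startup, so comparing the persisted status of the clone with what the CLI reports can show whether the "Started" state survived the restart. The path and field below assume the standard store layout (status=1 normally means started, 0 created):

# grep '^status=' /var/lib/glusterd/vols/clone1/info
# gluster volume info clone1 | grep '^Status'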
[root@dhcp46-4 ~]# gluster snapshot create snap1 vol no-timestamp
snapshot create: success: Snap snap1 created successfully

[root@dhcp46-4 ~]# gluster snapshot list
snap1

[root@dhcp46-4 ~]# gluster snapshot activate snap1
Snapshot activate: snap1: Snap activated successfully

[root@dhcp46-4 ~]# gluster snapshot clone clone1 snap1
snapshot clone: success: Clone clone1 created successfully

[root@dhcp46-4 ~]# gluster v info

Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0d86adee-2662-4223-9729-71f7dd3c004b
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
features.scrub: Active
features.bitrot: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

[root@dhcp46-4 ~]# gluster v start clone1
volume start: clone1: success

[root@dhcp46-4 ~]# init 6
Connection to 10.70.46.4 closed by remote host.
Connection to 10.70.46.4 closed.

[ashah@localhost ~]$ ssh root@10.70.46.4
root@10.70.46.4's password:
Last login: Tue May 3 20:37:06 2016

[root@dhcp46-4 ~]# gluster v info clone1

Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0d86adee-2662-4223-9729-71f7dd3c004b
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/run/gluster/snaps/clone1/brick1/b1
Brick2: 10.70.47.46:/run/gluster/snaps/clone1/brick2/b2
Brick3: 10.70.46.213:/run/gluster/snaps/clone1/brick3/b3
Brick4: 10.70.46.148:/run/gluster/snaps/clone1/brick4/b4
Options Reconfigured:
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.bitrot: on
features.scrub: Active

Bug verified on build glusterfs-3.7.9-3.el7rhgs.x86_64.
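As a supplementary check beyond 'gluster v info' (not part of the verification log above; the mount point is only an example), the brick processes and client access can also be confirmed after the reboot:

[root@dhcp46-4 ~]# gluster volume status clone1
[root@dhcp46-4 ~]# mount -t glusterfs 10.70.46.4:/clone1 /mnt/clone1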
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240