Description of problem:
=======================
Glusterd failed to start (Reason: Failed to recreate all snap brick mounts) after performing the operations below:
1. Take a volume snapshot
2. Restore the volume to the snapshot
3. Attach a tier (replica 2)
4. Reboot the node

Version-Release number of selected component (if applicable):
=============================================================
[root@transformers ~]# gluster --version
glusterfs 3.7.0 built on May 15 2015 01:31:12
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@transformers ~]#

How reproducible:
=================
100%

Steps to Reproduce:
===================
As in the description above (a command-line sketch follows this report).

Actual results:
===============
glusterd fails to start.

Expected results:
=================
glusterd should start successfully after the reboot.

Additional info:
================
sosreport of the failed node will be attached.
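For reference, a minimal command-line sketch of the reproduction steps, assuming a hypothetical volume name (vol0), snapshot name (snap1), and placeholder brick paths; substitute your own values. Note that a snapshot restore requires the volume to be stopped first, and the created snapshot gets a GMT timestamp suffix appended to its name:

    # 1. Take a volume snapshot
    gluster snapshot create snap1 vol0

    # 2. Restore the volume to the snapshot (volume must be stopped)
    gluster volume stop vol0
    gluster snapshot restore <snapname>    # use the generated name, e.g. snap1_GMT-...

    # 3. Start the volume and attach a replica 2 tier (hot-tier bricks are placeholders)
    gluster volume start vol0
    gluster volume attach-tier vol0 replica 2 node1:/bricks/hot1 node2:/bricks/hot2

    # 4. Reboot the node, then check whether glusterd comes back up
    reboot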
Master URL: http://review.gluster.org/#/c/11060/
Release 3.7 URL: http://review.gluster.org/#/c/11100/
RHGS 3.1 URL: https://code.engineering.redhat.com/gerrit/#/c/50360/
Version: glusterfs-3.7.1-3.el6rhs.x86_64

Steps followed:
===============
1) Created a 6x3 distributed-replicate volume
2) Fuse- and NFS-mounted the volume and created some data
3) Created 2 snapshots and activated them
4) Restored the volume to one of the snapshots:

    gluster snapshot restore Snap1234_GMT-2015.06.18-13.37.13
    Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
    Snapshot restore: Snap1234_GMT-2015.06.18-13.37.13: Snap restored successfully

5) Attached a replica 2 tier:

    gluster v attach-tier vol0 replica 2 rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick6/b6 inception.lab.eng.blr.redhat.com:/rhs/brick11/b11
    Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
    volume attach-tier: success
    volume rebalance: vol0: failed: Volume vol0 needs to be started to perform rebalance
    Failed to run tier start. Please execute tier start command explictly
    Usage : gluster volume rebalance <volname> tier start

6) Performed a rebalance tier start (the volume had to be started first):

    gluster volume rebalance vol0 tier start
    volume rebalance: vol0: failed: Volume vol0 needs to be started to perform rebalance
    [root@inception ~]# gluster v start vol0
    volume start: vol0: success
    [root@inception ~]# gluster volume rebalance vol0 tier start
    volume rebalance: vol0: success: Rebalance on vol0 has been started successfully. Use rebalance status command to check status of the rebalance process. ID: 70367c70-ee2d-4f6a-adca-0887b680f049

7) Rebooted Node1, Node2 and Node4

After the nodes came back up, checked glusterd on all nodes: glusterd is up and running on all nodes (a sketch of the check follows below).

Created another snapshot and activated it; both operations succeeded:

    gluster snapshot create new vol0
    snapshot create: success: Snap new_GMT-2015.06.18-13.46.22 created successfully
    [root@inception ~]# gluster snapshot activate new_GMT-2015.06.18-13.46.22
    Snapshot activate: new_GMT-2015.06.18-13.46.22: Snap activated successfully

Marking the bug 'Verified'.
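A minimal sketch of the post-reboot health check, assuming the standard glusterd service name; the build verified here is el6, where sysvinit's "service" command applies, while systemd-based hosts would use "systemctl status glusterd" instead:

    # Confirm the management daemon came back after the reboot
    service glusterd status          # on systemd hosts: systemctl status glusterd

    # Confirm all peers are connected and the volume is started
    gluster peer status
    gluster volume status vol0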
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html