Bug 1224175 - Glusterd fails to start after volume restore, tier attach and node reboot
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Avra Sengupta
QA Contact: storage-qa-internal@redhat.com
Keywords: Triaged
Depends On:
Blocks: 1202842 1223636 1227646 1228592
Reported: 2015-05-22 06:02 EDT by Bhaskarakiran
Modified: 2016-11-23 18:11 EST
CC: 9 users

See Also:
Fixed In Version: glusterfs-3.7.1-2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned To: 1227646
Environment:
Last Closed: 2015-07-29 00:48:38 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
  Tracker: Red Hat Product Errata RHSA-2015:1495
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Important: Red Hat Gluster Storage 3.1 update
  Last Updated: 2015-07-29 04:26:26 EDT

Description Bhaskarakiran 2015-05-22 06:02:16 EDT
Description of problem:
=======================

Glusterd failed to start (Reason: "Failed to recreate all snap brick mounts") after performing the operations below (a minimal command sketch follows the list):

1. Take a snapshot of the volume
2. Restore the volume from the snapshot
3. Attach a replica-2 tier
4. Reboot the node
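
A minimal command sketch of these steps (assumptions: the volume name vol0, hostnames node1/node2, and brick paths are hypothetical; the CLI appends a GMT timestamp to the snapshot name at create time):

# 1. Take a snapshot of the volume
gluster snapshot create snap1 vol0
# 2. Restore the volume to the snapshot (the volume must be stopped first)
gluster volume stop vol0
gluster snapshot restore snap1_GMT-<timestamp>
# 3. Attach a replica-2 tier
gluster volume attach-tier vol0 replica 2 node1:/bricks/hot1 node2:/bricks/hot2
# 4. Reboot the node
reboot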

Version-Release number of selected component (if applicable):
=============================================================
[root@transformers ~]# gluster --version
glusterfs 3.7.0 built on May 15 2015 01:31:12
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@transformers ~]# 

How reproducible:
================
100%

Steps to Reproduce:
As in the description above.


Actual results:
glusterd fails to start
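
A typical way to confirm the failure (a sketch; assumes the default glusterd log location and a systemd host; on RHEL 6 use 'service glusterd status' instead):

systemctl status glusterd
grep "Failed to recreate all snap brick mounts" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log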

Expected results:
glusterd should start successfully after the reboot.


Additional info:
An sosreport of the failed node will be attached.
Comment 5 senaik 2015-06-18 09:53:46 EDT
Version : glusterfs-3.7.1-3.el6rhs.x86_64

Steps followed :
==============

1) Created a 6x3 dist rep volume 
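
(A 6x3 distributed-replicate volume takes 18 bricks; a sketch with hypothetical hostnames and brick paths, where each consecutive group of three bricks forms one replica set:)

gluster volume create vol0 replica 3 \
    node{1..6}:/rhs/brick1/b node{1..6}:/rhs/brick2/b node{1..6}:/rhs/brick3/b
gluster volume start vol0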

2) Fuse and NFS mount the volume and create some data 
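
(A mount sketch with hypothetical hostnames and mount points; Gluster's built-in NFS server speaks NFSv3:)

mount -t glusterfs node1:/vol0 /mnt/fuse
mount -t nfs -o vers=3 node1:/vol0 /mnt/nfs
dd if=/dev/urandom of=/mnt/fuse/file1 bs=1M count=100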

3) Create 2 snapshots and activate them 
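
(A sketch with hypothetical snapshot names; the CLI appends a GMT timestamp, as seen in the restore output below:)

gluster snapshot create Snap1234 vol0
gluster snapshot create Snap5678 vol0
gluster snapshot activate Snap1234_GMT-<timestamp>
gluster snapshot activate Snap5678_GMT-<timestamp>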

4) Restore volume to one of the snapshots 
 gluster snapshot restore Snap1234_GMT-2015.06.18-13.37.13
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: Snap1234_GMT-2015.06.18-13.37.13: Snap restored successfully

5) Attach a replica 2 tier 
gluster v attach-tier vol0 replica 2 rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick6/b6 inception.lab.eng.blr.redhat.com:/rhs/brick11/b11
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol0: failed: Volume vol0 needs to be started to perform rebalance
Failed to run tier start. Please execute tier start command explictly
Usage : gluster volume rebalance <volname> tier start

6) Perform a rebalance tier start 
 gluster volume rebalance vol0 tier start
volume rebalance: vol0: failed: Volume vol0 needs to be started to perform rebalance
[root@inception ~]# gluster v start vol0 
volume start: vol0: success
[root@inception ~]# gluster volume rebalance vol0 tier start
volume rebalance: vol0: success: Rebalance on vol0 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 70367c70-ee2d-4f6a-adca-0887b680f049

7) Reboot Node1 Node2 and Node4 

When the nodes came back up, glusterd was checked on all nodes; it is up and running on all of them.
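
(One way to check from a single node, assuming passwordless ssh and hypothetical hostnames; this setup is RHEL 6 based, hence 'service' rather than 'systemctl':)

for h in node1 node2 node4; do ssh $h service glusterd status; done
gluster peer status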
Creating and activating another snapshot is also successful:
gluster snapshot create new vol0
snapshot create: success: Snap new_GMT-2015.06.18-13.46.22 created successfully
[root@inception ~]# gluster snapshot activate new_GMT-2015.06.18-13.46.22
Snapshot activate: new_GMT-2015.06.18-13.46.22: Snap activated successfully

Marking the bug 'Verified'
Comment 6 errata-xmlrpc 2015-07-29 00:48:38 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
