Bug 1224175

Summary: Glusterd fails to start after volume restore, tier attach and node reboot
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Bhaskarakiran <byarlaga>
Component: snapshot
Assignee: Avra Sengupta <asengupt>
Status: CLOSED ERRATA
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: unspecified
Docs Contact:
Priority: medium
Version: rhgs-3.1
CC: asengupt, ashah, asrivast, byarlaga, mzywusko, rhs-bugs, rjoseph, senaik, storage-qa-internal
Target Milestone: ---
Keywords: Triaged
Target Release: RHGS 3.1.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.1-2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1227646 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:48:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1202842, 1223636, 1227646, 1228592

Description Bhaskarakiran 2015-05-22 10:02:16 UTC
Description of problem:
=======================

Glusterd failed to start (reason: "Failed to recreate all snap brick mounts") after performing the operations below (see the command sketch after the list):

1. Take a snapshot of the volume
2. Restore the volume to the snapshot
3. Attach a tier (replica 2)
4. Reboot the node
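
A minimal command sketch of these steps (hostnames, brick paths and snapshot names are hypothetical; the volume must be stopped before a snapshot restore, and the generated snap name includes a GMT timestamp):

    gluster snapshot create snap1 vol0
    gluster volume stop vol0
    gluster snapshot restore snap1_GMT-<timestamp>
    gluster volume start vol0
    gluster volume attach-tier vol0 replica 2 server1:/rhs/hot/b1 server2:/rhs/hot/b2
    reboot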

Version-Release number of selected component (if applicable):
=============================================================
[root@transformers ~]# gluster --version
glusterfs 3.7.0 built on May 15 2015 01:31:12
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@transformers ~]# 

How reproducible:
================
100%

Steps to Reproduce:
As in the description above.


Actual results:
glusterd fails to start
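
On the failed node, the reason can be confirmed at service start and in the glusterd log (a sketch, assuming the default log location on this build):

    service glusterd start
    grep "Failed to recreate all snap brick mounts" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log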

Expected results:
glusterd should start successfully after the node reboot.

Additional info:
An sosreport of the failed node will be attached.

Comment 5 senaik 2015-06-18 13:53:46 UTC
Version : glusterfs-3.7.1-3.el6rhs.x86_64

Steps followed :
==============

1) Created a 6x3 dist rep volume 
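
(Hypothetical equivalent of this step; server names and brick paths are assumed. 18 bricks at replica 3 form 6 distribute subvolumes:)

    gluster volume create vol0 replica 3 \
        server1:/rhs/brick1/b1 server2:/rhs/brick1/b1 server3:/rhs/brick1/b1 \
        server1:/rhs/brick2/b2 server2:/rhs/brick2/b2 server3:/rhs/brick2/b2 \
        server1:/rhs/brick3/b3 server2:/rhs/brick3/b3 server3:/rhs/brick3/b3 \
        server1:/rhs/brick4/b4 server2:/rhs/brick4/b4 server3:/rhs/brick4/b4 \
        server1:/rhs/brick5/b5 server2:/rhs/brick5/b5 server3:/rhs/brick5/b5 \
        server1:/rhs/brick6/b6 server2:/rhs/brick6/b6 server3:/rhs/brick6/b6
    gluster volume start vol0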

2) Fuse and NFS mount the volume and create some data 
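
(Sketch, assuming a server named server1 and local mount points; gluster-NFS serves NFSv3:)

    mount -t glusterfs server1:/vol0 /mnt/vol0-fuse
    mount -t nfs -o vers=3 server1:/vol0 /mnt/vol0-nfs
    dd if=/dev/urandom of=/mnt/vol0-fuse/file1 bs=1M count=100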

3) Create 2 snapshots and activate them 
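
(Sketch of this step; the second snap name is hypothetical, and activation takes the full generated name with its GMT timestamp:)

    gluster snapshot create Snap1234 vol0
    gluster snapshot activate Snap1234_GMT-2015.06.18-13.37.13
    gluster snapshot create Snap5678 vol0
    gluster snapshot activate Snap5678_GMT-<timestamp>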

4) Restore volume to one of the snapshots 
 gluster snapshot restore Snap1234_GMT-2015.06.18-13.37.13
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: Snap1234_GMT-2015.06.18-13.37.13: Snap restored successfully

5) Attach a replica 2 tier 
gluster v attach-tier vol0 replica 2 rhs-arch-srv3.lab.eng.blr.redhat.com:/rhs/brick6/b6 inception.lab.eng.blr.redhat.com:/rhs/brick11/b11
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol0: failed: Volume vol0 needs to be started to perform rebalance
Failed to run tier start. Please execute tier start command explicitly
Usage : gluster volume rebalance <volname> tier start

6) Perform a rebalance tier start 
 gluster volume rebalance vol0 tier start
volume rebalance: vol0: failed: Volume vol0 needs to be started to perform rebalance
[root@inception ~]# gluster v start vol0 
volume start: vol0: success
[root@inception ~]# gluster volume rebalance vol0 tier start
volume rebalance: vol0: success: Rebalance on vol0 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 70367c70-ee2d-4f6a-adca-0887b680f049

7) Reboot Node1, Node2 and Node4 

After the rebooted nodes come back up, checked glusterd on all nodes: glusterd is up and running on all of them.
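
(Checked with, for example, the following; node names are hypothetical, and this el6 build uses the SysV init service:)

    for n in node1 node2 node3 node4; do ssh $n service glusterd status; done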
Created another snapshot and activated it; both operations succeeded:
gluster snapshot create new vol0
snapshot create: success: Snap new_GMT-2015.06.18-13.46.22 created successfully
[root@inception ~]# gluster snapshot activate new_GMT-2015.06.18-13.46.22
Snapshot activate: new_GMT-2015.06.18-13.46.22: Snap activated successfully

Marking the bug 'Verified'

Comment 6 errata-xmlrpc 2015-07-29 04:48:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html