Bug 1278408

Summary: [Tier]: Volume start failed after tier attach to newly created stopped volume.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Byreddy <bsrirama>
Component: tier
Assignee: Mohammed Rafi KC <rkavunga>
Status: CLOSED ERRATA
QA Contact: surabhi <sbhaloth>
Severity: high
Priority: unspecified
Version: rhgs-3.1
CC: dlambrig, rcyriac, rhs-bugs, rkavunga, sankarshan, sbhaloth, storage-qa-internal
Keywords: ZStream
Target Release: RHGS 3.1.2
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.7.5-7
Doc Type: Bug Fix
Last Closed: 2016-03-01 05:53:06 UTC
Type: Bug
Bug Blocks: 1260783, 1260923

Description Byreddy 2015-11-05 12:06:45 UTC
Description of problem:
=======================
Volume start fails with the error message "Commit failed" after attaching a tier to a newly created volume that is still in the stopped state.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-5


How reproducible:
=================
100%


Steps to Reproduce:
===================
1. Have a 3-node cluster running RHGS 3.1.2 (glusterfs-3.7.5-5) (node-1, node-2, and node-3).
2. Create a replica volume (1x2) using bricks on node-1 and node-2.
3. Attach a tier using a brick on node-3 (distribute, 1x1).
4. Start the volume (see the command sketch below).
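
A minimal command sketch of the above steps, assuming an illustrative volume name (testvol) and brick paths (/bricks/b1, /bricks/hot1) that are not from the original report; the attach-tier syntax follows the glusterfs 3.7 CLI:

  # gluster volume create testvol replica 2 node-1:/bricks/b1 node-2:/bricks/b1
  # gluster volume attach-tier testvol node-3:/bricks/hot1
  # gluster volume start testvol     <-- fails with "Commit failed on <node-3 IP>"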

Actual results:
===============
The volume does not start; it fails with the error message "Commit failed on <ip>",
where <ip> is node-3's IP address.


Expected results:
=================
The volume should start successfully.

Additional info:
================
Volume start creates brick processes on node-1 and node-2, but not on node-3.
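
To confirm which nodes actually spawned brick processes, the volume status and the process list can be checked on each node; a minimal sketch, reusing the illustrative volume name from above:

  # gluster volume status testvol    <-- the node-3 brick shows Online "N" and no PID
  # ps aux | grep glusterfsd         <-- run on each node; no glusterfsd process on node-3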

Comment 3 Byreddy 2015-11-19 06:58:17 UTC
Please refer to the bug below for more information.
https://bugzilla.redhat.com/show_bug.cgi?id=1279319

Comment 4 Mohammed Rafi KC 2015-11-20 12:04:31 UTC
https://code.engineering.redhat.com/gerrit/#/c/61981/

Comment 5 surabhi 2015-11-26 06:34:06 UTC
Verified the bug with the following steps:

1. Created a 1x2 replicate volume using node 1 and node 2.
2. Attached a tier using node 3 (distribute).
3. Started the volume.
4. Checked the volume status and tier rebalance status (see the command sketch below).
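
A minimal sketch of the verification commands, reusing the illustrative names from the description above; the tier rebalance status invocation follows the RHGS 3.1 tiering CLI and may differ in other versions:

  # gluster volume create testvol replica 2 node-1:/bricks/b1 node-2:/bricks/b1
  # gluster volume attach-tier testvol node-3:/bricks/hot1
  # gluster volume start testvol
  # gluster volume status testvol
  # gluster volume rebalance testvol tier status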

Actual results:
Volume start is successful after attaching a tier to a stopped volume, and the tier rebalance starts once the volume is started.
It shows a failure for localhost, for which a separate BZ is present.

Also tried with:

1. Create a 2x2 distributed-replicate volume.
2. Attach a 2x2 tier.
3. Start the volume.
4. Check the volume status and tier rebalance status (see the sketch below).
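
A similar sketch for the 2x2 distributed-replicate case with a 2x2 tier (all volume and brick names illustrative):

  # gluster volume create distrep replica 2 node-1:/bricks/b1 node-2:/bricks/b1 node-1:/bricks/b2 node-2:/bricks/b2
  # gluster volume attach-tier distrep replica 2 node-1:/bricks/h1 node-2:/bricks/h1 node-1:/bricks/h2 node-2:/bricks/h2
  # gluster volume start distrep
  # gluster volume status distrep
  # gluster volume rebalance distrep tier status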

Marking the bug as verified on build:

glusterfs-api-3.7.5-7.el7rhgs.x86_64
glusterfs-server-3.7.5-7.el7rhgs.x86_64
samba-vfs-glusterfs-4.2.4-6.el7rhgs.x86_64
glusterfs-libs-3.7.5-7.el7rhgs.x86_64
glusterfs-3.7.5-7.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-7.el7rhgs.x86_64
glusterfs-cli-3.7.5-7.el7rhgs.x86_64
glusterfs-rdma-3.7.5-7.el7rhgs.x86_64
glusterfs-fuse-3.7.5-7.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-7.el7rhgs.x86_64

Comment 7 errata-xmlrpc 2016-03-01 05:53:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html