Bug 1278408 - [Tier]: Volume start failed after tier attach to newly created stopped volume.
Summary: [Tier]: Volume start failed after tier attach to newly created stopped volume.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Mohammed Rafi KC
QA Contact: surabhi
URL:
Whiteboard:
Depends On:
Blocks: 1260783 1260923
 
Reported: 2015-11-05 12:06 UTC by Byreddy
Modified: 2016-09-17 15:37 UTC

Fixed In Version: glusterfs-3.7.5-7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-01 05:53:06 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:0193 (normal, SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2, last updated 2016-03-01 10:20:36 UTC

Description Byreddy 2015-11-05 12:06:45 UTC
Description of problem:
=======================
Volume start fails with the error message "Commit failed" after attaching a tier to a newly created volume that is still in the stopped state.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-5


How reproducible:
=================
100%


Steps to Reproduce:
===================
1. Have a 3-node cluster running RHGS 3.1.2 (glusterfs-3.7.5-5) (node-1, node-2, and node-3).
2. Create a replicate volume (1x2) using bricks on node-1 and node-2, leaving it stopped.
3. Attach a tier using a brick on node-3 (distribute, 1x1).
4. Start the volume (see the command sketch below).
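A minimal command sketch of these steps, assuming a hypothetical volume name and brick paths, and the attach-tier CLI form as it existed in glusterfs 3.7:

# Create a 1x2 replicate volume; note that it is NOT started (paths are hypothetical)
gluster volume create testvol replica 2 node-1:/bricks/b0 node-2:/bricks/b0
# Attach a single-brick (distribute) hot tier from node-3 while the volume is stopped
gluster volume attach-tier testvol node-3:/bricks/hot0
# Attempt to start the volume; this is the step that failed with "Commit failed"
gluster volume start testvol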

Actual results:
===============
The volume does not start; it fails with the error message "Commit failed on <ip>",
where <ip> is node-3's IP.


Expected results:
=================
Volume should start successfully 

Additional info:
================
Volume start created brick processes on node-1 and node-2, but not on node-3.
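One way to confirm this, sketched with the same hypothetical volume name:

# Show per-brick status; bricks without a running process have no PID listed
gluster volume status testvol
# On node-3, verify that no brick daemon was spawned for the volume
pgrep -af glusterfsd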

Comment 3 Byreddy 2015-11-19 06:58:17 UTC
Please refer to the bug below for more info.
https://bugzilla.redhat.com/show_bug.cgi?id=1279319

Comment 4 Mohammed Rafi KC 2015-11-20 12:04:31 UTC
https://code.engineering.redhat.com/gerrit/#/c/61981/

Comment 5 surabhi 2015-11-26 06:34:06 UTC
Verified the bug with the following steps:

1. Created a 1x2 replicate volume using node-1 and node-2.
2. Attached a tier using node-3 (distribute).
3. Started the volume.
4. Checked the volume status and tier rebalance status (see the sketch after this list).
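As a hedged command sketch (same hypothetical names; on glusterfs 3.7, tier rebalance status was commonly queried through the rebalance CLI):

gluster volume start testvol
gluster volume status testvol
# Tier rebalance status; the exact command form may vary across 3.7 builds
gluster volume rebalance testvol tier status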

Actual results:
The volume start is successful after attaching a tier to a stopped volume, and the tier rebalance starts once the volume is started.
It shows 'failed' for localhost, for which a separate BZ is present.

Also tried with:

1. Create a 2x2 distributed-replicate volume.
2. Attach a 2x2 distributed-replicate tier.
3. Start the volume.
4. Check the volume status and tier rebalance status (see the sketch below).
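A sketch of that variant, with a hypothetical brick layout and the replica-aware attach-tier form assumed from glusterfs 3.7:

# 2x2 distributed-replicate cold volume (brick paths hypothetical)
gluster volume create distrep replica 2 node-1:/bricks/c1 node-2:/bricks/c1 node-1:/bricks/c2 node-2:/bricks/c2
# 2x2 distributed-replicate hot tier, attached while the volume is stopped
gluster volume attach-tier distrep replica 2 node-1:/bricks/h1 node-2:/bricks/h1 node-3:/bricks/h2 node-1:/bricks/h2
gluster volume start distrep
gluster volume rebalance distrep tier status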

Marking the bug as verified on build:

glusterfs-api-3.7.5-7.el7rhgs.x86_64
glusterfs-server-3.7.5-7.el7rhgs.x86_64
samba-vfs-glusterfs-4.2.4-6.el7rhgs.x86_64
glusterfs-libs-3.7.5-7.el7rhgs.x86_64
glusterfs-3.7.5-7.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-7.el7rhgs.x86_64
glusterfs-cli-3.7.5-7.el7rhgs.x86_64
glusterfs-rdma-3.7.5-7.el7rhgs.x86_64
glusterfs-fuse-3.7.5-7.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-7.el7rhgs.x86_64

Comment 7 errata-xmlrpc 2016-03-01 05:53:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

