Bug 970691

Summary: After the user is prevented from creating a volume with duplicate bricks from a previously created volume, the previously created volume fails to start
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: rhsc
Version: 2.1
Type: Bug
Status: CLOSED NOTABUG
Priority: high
Severity: unspecified
Reporter: Dustin Tsang <dtsang>
Assignee: Sahina Bose <sabose>
QA Contact: Dustin Tsang <dtsang>
CC: dtsang, knarra, mmahoney, mmccune, pprakash, rhs-bugs, sdharane, ssampat
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2013-07-03 13:43:06 UTC
Attachments:
  engine log
  vdsm log from one of the nodes
  vdsm log from second host

Description Dustin Tsang 2013-06-04 15:35:02 UTC
Created attachment 756845 [details]
engine log

Description of problem:

After the user is prevented from creating a volume with duplicate bricks from a previously created volume, the previously created volume fails to start.

Version-Release number of selected component (if applicable):

rhsc-2.1.0-0.bb1.el6rhs.noarch

vdsm-4.10.2-18.0.1.el6rhs.x86_64
glusterfs-3.4.0.8rhs-1.el6rhs.x86_64


How reproducible:
100%

Steps to Reproduce:
1. create a cluster with 2 nodes
2. create a distributed volume with 3 bricks on each node
3. try to create a distributed volume with the same bricks => fails as expected
4. start the volume from step 2 (a rough gluster CLI equivalent of these steps is sketched below)
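
These steps were presumably performed through the RHSC UI; the following gluster CLI sketch is an illustrative equivalent (the volume names and brick paths are assumptions, not taken from this report):

  # step 2: distributed volume with 3 bricks on each of the two nodes
  gluster volume create distvol \
      node1:/rhs/brick1/gluster/bricks/distribute/brick1 \
      node1:/rhs/brick1/gluster/bricks/distribute/brick2 \
      node1:/rhs/brick1/gluster/bricks/distribute/brick3 \
      node2:/rhs/brick1/gluster/bricks/distribute/brick1 \
      node2:/rhs/brick1/gluster/bricks/distribute/brick2 \
      node2:/rhs/brick1/gluster/bricks/distribute/brick3

  # step 3: rejected, because the bricks already belong to distvol
  gluster volume create distvol2 node1:/rhs/brick1/gluster/bricks/distribute/brick1 [same bricks as above]

  # step 4: the operation that fails in this report
  gluster volume start distvol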

Actual results:

The volume fails to start, and "Could not start Gluster Volume" is displayed in the RHSC events.

Expected results:

Volume should start.


Additional info:

From the gluster CLI on one of the nodes, the error shown is:

Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /rhs/brick1/gluster/bricks/distribute/brick1. Reason : No data available
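
That message means the trusted.glusterfs.volume-id extended attribute is missing from the brick root. It can be checked directly on the node (assuming getfattr from the attr package is available), using the brick path from the error:

  # print the brick's volume-id xattr as hex
  getfattr -n trusted.glusterfs.volume-id -e hex /rhs/brick1/gluster/bricks/distribute/brick1

On a healthy brick this prints a hex-encoded UUID; if the brick directory was removed and recreated, the attribute is gone and glusterd refuses to start the volume.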

Comment 1 Dustin Tsang 2013-06-04 15:35:50 UTC
Created attachment 756846 [details]
vdsm log from one of the nodes

Comment 2 Dustin Tsang 2013-06-04 15:36:22 UTC
Created attachment 756847 [details]
vdsm log from second host

Comment 4 Sahina Bose 2013-07-03 10:42:36 UTC
Could not reproduce this issue. 
Were the bricks used to create the volume in the first step tampered with in any way? (Is this part of your automation run where the bricks are deleted when volume creation fails?)

Could you share your setup?
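
One scenario consistent with this question (an assumption, not confirmed by the attached logs): deleting and recreating a brick directory strips its extended attributes, after which the volume can no longer start. Illustratively:

  # wipe and recreate one brick of the existing (stopped) volume
  rm -rf /rhs/brick1/gluster/bricks/distribute/brick1
  mkdir -p /rhs/brick1/gluster/bricks/distribute/brick1

  # the recreated directory carries no trusted.glusterfs.volume-id xattr,
  # so starting the volume now fails with the error quoted in the description
  gluster volume start distvol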

Comment 5 Dustin Tsang 2013-07-03 13:43:06 UTC
I couldn't reproduce it either. Closing the bug.