Bug 835494 - Volume creation fails and gives error "<brickname> or a prefix of it is already part of a volume", even though that brick is not part of any volume.
Summary: Volume creation fails and gives error "<brickname> or a prefix of it is alrea...
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.3.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks: 852293
Reported: 2012-06-26 10:59 UTC by Rachana Patel
Modified: 2015-04-20 11:56 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 852293 (view as bug list)
Environment:
Last Closed: 2014-12-14 19:40:28 UTC
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:
Embargoed:


Attachments
Nth server log (20.00 KB, application/x-tar)
2012-06-26 11:05 UTC, Rachana Patel

Description Rachana Patel 2012-06-26 10:59:00 UTC
Description of problem:
Once the user gets an 'Operation failed on <server>' error (because the brick directory had not been created on one of the servers in the cluster), and then creates that brick directory on the server and retries the volume creation with the same bricks, the command fails with "<brickname> or a prefix of it is already part of a volume", even though that brick is not part of any volume.

Version-Release number of selected component (if applicable):
3.3.0

How reproducible:
always

Steps to Reproduce:
1. Create a cluster of N servers.
2. Create the brick directory on N-1 of the servers.
3. Create a volume using the bricks created in step 2, but include all N servers in the volume create command (it will fail with "Operation failed on <server N>").
4. Create the brick directory on the Nth server.
5. Re-run the gluster volume create command (same as step 3); it will fail with the error:
<brick> or a prefix of it is already part of a volume

e.g.
1.
[root@dell-pe840-02 vols]# gluster p s
Number of Peers: 3

Hostname: 10.16.65.43
Uuid: 80bdd46f-9dcb-4d26-abec-243c9e42b9aa
State: Peer in Cluster (Connected)

Hostname: 10.16.64.139
Uuid: cccd6a5d-00ea-41c4-9075-d1ae46b031ee
State: Peer in Cluster (Connected)

Hostname: 10.16.71.146
Uuid: 17fa0939-7ab2-4268-a18e-7224ce76aba0
State: Peer in Cluster (Connected)

2.
run 'mkdir -p /kp1/test/t1' on all servers except the last one, '10.16.71.146'

3. run the command below:
'gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1'
it will fail

[root@dell-pe840-02 vols]# gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1
Operation failed on 10.16.71.146

4. on server '10.16.71.146' run 'mkdir -p /kp1/test/t1'

5. re-run the gluster volume create command and it will give the error:
[root@dell-pe840-02 vols]# gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1
/kp1/test/t1 or a prefix of it is already part of a volume
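
Presumably the check behind this message is triggered by GlusterFS extended attributes that the earlier, failed create attempt left behind on the brick directory or one of its parent directories (see the work-around in comment 2 below; the attribute names involved are taken from there, not confirmed against the glusterd source). An illustrative way to inspect a brick for such leftovers, run on each server:

# dump all extended attributes set on the brick directory
getfattr -d -m . -e hex /kp1/test/t1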


  
Actual results:
Volume creation fails.

Expected results:
Once the user has created the brick directory on all servers, volume creation should not fail (as long as that brick directory is not part of any existing volume).

Additional info:

Comment 1 Rachana Patel 2012-06-26 11:05:47 UTC
Created attachment 594431 [details]
Nth server log

Comment 2 Ladd 2012-06-28 21:24:55 UTC
This can be resolved by removing the extended attributes from the brick(s) that fail to be added.

use 

setfattr -x trusted.gfid dir/brick
setfattr -x trusted.glusterfs.volume-id dir/brick

on server N
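
For reference, the same work-around applied to the brick path from the example above, with a check of which attributes are actually present before and after (a sketch only; setfattr -x fails on an attribute that is not set, so inspect first):

# list the glusterfs-related extended attributes currently on the brick
getfattr -d -m . -e hex /kp1/test/t1
# remove the stale attributes, then re-check
setfattr -x trusted.gfid /kp1/test/t1
setfattr -x trusted.glusterfs.volume-id /kp1/test/t1
getfattr -d -m . -e hex /kp1/test/t1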

Comment 3 Amar Tumballi 2012-07-11 06:09:53 UTC
As a work-around is available, this is not being treated as higher priority. The scenario surely needs to be documented.

Comment 4 Rachana Patel 2013-01-15 12:32:40 UTC
The work-around is there, but if the volume has not been created (for any reason) then glusterd should not set attributes on those bricks.

Comment 5 Niels de Vos 2014-11-27 14:53:43 UTC
The version that this bug has been reported against does not receive updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will be closed automatically.

