Bug 835494 - Volume creation fails and gives error "<brickname> or a prefix of it is already part of a volume", even though that brick is not part of any volume.
Status: CLOSED DEFERRED
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.3.0
Hardware: x86_64 Linux
Priority: medium
Severity: medium
: ---
: ---
Assigned To: bugs@gluster.org
amainkar
: Triaged
Depends On:
Blocks: 852293
 
Reported: 2012-06-26 06:59 EDT by Rachana Patel
Modified: 2015-04-20 07:56 EDT (History)
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 852293 (view as bug list)
Environment:
Last Closed: 2014-12-14 14:40:28 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments (Terms of Use)
Nth server log (20.00 KB, application/x-tar)
2012-06-26 07:05 EDT, Rachana Patel
no flags

Description Rachana Patel 2012-06-26 06:59:00 EDT
Description of problem:
Once the user gets an 'Operation failed on <server>' error (reason: the brick/dir was not created on one of the servers in the cluster), and then creates the brick/dir on that server and retries creating the volume with the same bricks, it gives the error "<brickname> or a prefix of it is already part of a volume", even though that brick is not part of any volume.

Version-Release number of selected component (if applicable):
3.3.0

How reproducible:
always

Steps to Reproduce:
1. create a cluster of N servers.
2. create a dir/brick on N-1 servers
3. create a volume using the bricks created in step 2, but use all N servers in volume creation (it will give 'Operation failed on <server N>')
4. create the dir/brick on the Nth server
5. re-run gluster volume creation (same as step 3) and it will give the error
<brick> or a prefix of it is already part of a volume

e.g.
1.
[root@dell-pe840-02 vols]# gluster p s
Number of Peers: 3

Hostname: 10.16.65.43
Uuid: 80bdd46f-9dcb-4d26-abec-243c9e42b9aa
State: Peer in Cluster (Connected)

Hostname: 10.16.64.139
Uuid: cccd6a5d-00ea-41c4-9075-d1ae46b031ee
State: Peer in Cluster (Connected)

Hostname: 10.16.71.146
Uuid: 17fa0939-7ab2-4268-a18e-7224ce76aba0
State: Peer in Cluster (Connected)

2.
run 'mkdir -p /kp1/test/t1' on all servers except the last one, '10.16.71.146'

3. run the command below:
'gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1'
it will fail

[root@dell-pe840-02 vols]# gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1
Operation failed on 10.16.71.146

4. on server '10.16.71.146' run 'mkdir -p /kp1/test/t1'

5. re-run gluster volume creation and it will give an error
[root@dell-pe840-02 vols]# gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1
/kp1/test/t1 or a prefix of it is already part of a volume


  
Actual results:
Volume creation fails.

Expected results:
Once the user creates the brick/dir on all servers, volume creation should not fail (if that brick/dir is not part of any existing volume).

Additional info:
Comment 1 Rachana Patel 2012-06-26 07:05:47 EDT
Created attachment 594431 [details]
Nth server log
Comment 2 Ladd 2012-06-28 17:24:55 EDT
This is resolvable by removing the extended attributes from the brick(s) that fail to add.

use 

setfattr -x trusted.gfid dir/brick
setfattr -x trusted.glusterfs.volume-id dir/brick

on serverN
Comment 3 Amar Tumballi 2012-07-11 02:09:53 EDT
As a workaround is available, not treating this as higher priority. The scenario surely needs to be documented.
Comment 4 Rachana Patel 2013-01-15 07:32:40 EST
A workaround exists, but if the volume has not been created (for whatever reason), glusterd should not set attributes on those bricks in the first place.
Comment 5 Niels de Vos 2014-11-27 09:53:43 EST
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify if this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will get automatically closed.
