Description of problem: If 'gluster volume create' fails with 'Operation failed on <server>' (reason: the brick directory was not created on one server of the cluster), and the user then creates the brick directory on that server and retries the volume create with the same bricks, the command fails with "<brickname> or a prefix of it is already part of a volume", even though that brick is not part of any volume.

Version-Release number of selected component (if applicable): 3.3.0

How reproducible: always

Steps to Reproduce:
1. Create a cluster of N servers.
2. Create the brick directory on N-1 of the servers.
3. Create a volume using the bricks from step 2, but list all N servers in the create command. It fails with 'Operation failed on <server N>'.
4. Create the brick directory on the Nth server.
5. Re-run the same volume create command as in step 3. It fails with '<brick> or a prefix of it is already part of a volume'.

Example:

1. Peer status:
[root@dell-pe840-02 vols]# gluster p s
Number of Peers: 3

Hostname: 10.16.65.43
Uuid: 80bdd46f-9dcb-4d26-abec-243c9e42b9aa
State: Peer in Cluster (Connected)

Hostname: 10.16.64.139
Uuid: cccd6a5d-00ea-41c4-9075-d1ae46b031ee
State: Peer in Cluster (Connected)

Hostname: 10.16.71.146
Uuid: 17fa0939-7ab2-4268-a18e-7224ce76aba0
State: Peer in Cluster (Connected)

2. Run 'mkdir -p /kp1/test/t1' on every server except the last one, 10.16.71.146.

3. Run the create command; it fails:
[root@dell-pe840-02 vols]# gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1
Operation failed on 10.16.71.146

4. On server 10.16.71.146, run 'mkdir -p /kp1/test/t1'.

5. Re-run the volume create command; it now fails with a different error:
[root@dell-pe840-02 vols]# gluster volume create kp1test 10.16.64.191:/kp1/test/t1 10.16.65.43:/kp1/test/t1 10.16.64.139:/kp1/test/t1 10.16.71.146:/kp1/test/t1
/kp1/test/t1 or a prefix of it is already part of a volume

Actual results: Volume creation fails.

Expected results: Once the user has created the brick directory on all servers, volume creation should not fail (provided that brick directory is not part of any existing volume).

Additional info:
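The "or a prefix of it" wording suggests the check walks the brick path upward, looking for a volume marker xattr left behind by the earlier, partially failed create. The following is a minimal Python sketch of that idea, not the actual glusterd implementation; the simplifying assumption is that the check amounts to probing the brick path and each parent directory for the trusted.glusterfs.volume-id extended attribute.

```python
import os

# Assumption for this sketch: a directory "belongs to a volume" iff it
# carries this marker xattr (glusterd also checks related markers).
VOLUME_ID_XATTR = "trusted.glusterfs.volume-id"

def find_volume_prefix(brick_path):
    """Walk from the brick path up to /, returning the first directory
    that carries the volume-id xattr, or None if the path is clean."""
    path = os.path.abspath(brick_path)
    while True:
        try:
            os.getxattr(path, VOLUME_ID_XATTR)
            return path  # marker found: this prefix is part of a volume
        except OSError:
            pass  # no marker here (missing path or no xattr support also land here)
        if path == "/":
            return None
        path = os.path.dirname(path)
```

Under this model, the stale marker written on the N-1 good bricks during the failed step-3 create is exactly what makes the step-5 retry report "/kp1/test/t1 or a prefix of it is already part of a volume".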
Created attachment 594431 [details] Nth server log
This is resolvable by removing the extended attributes from the brick(s) that are failing to add. On server N, run:

setfattr -x trusted.gfid dir/brick
setfattr -x trusted.glusterfs.volume-id dir/brick
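When several bricks are affected, the workaround above can be scripted. A hedged sketch (assumes a Linux host, root privileges for the trusted.* namespace, and that only the two xattrs named above need clearing):

```python
import os

# The two marker xattrs named in the workaround.
STALE_XATTRS = ("trusted.gfid", "trusted.glusterfs.volume-id")

def clear_stale_markers(brick_path):
    """Remove the marker xattrs that a failed 'gluster volume create'
    may have left on a brick directory. Markers that are absent (or
    that we lack permission to remove) are silently skipped."""
    removed = []
    for name in STALE_XATTRS:
        try:
            os.removexattr(brick_path, name)
            removed.append(name)
        except OSError:
            pass  # xattr not present, or operation not permitted
    return removed
```

Run it on each brick directory of the failed create before retrying the volume creation.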
Since a workaround is available, this is not being treated as higher priority. The scenario definitely needs to be documented, though.
A workaround exists, but if the volume was not created (for whatever reason), glusterd should not set the attributes on those bricks in the first place.
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug. If there has been no update before 9 December 2014, this bug will get automatically closed.