Description of problem: A non-existent brick path was used in an "add-brick" command, and the command failed. When the command was repeated with the correct brick path, it failed again, this time reporting that another brick from the command "is already part of a volume". All subsequent "remove-brick" calls failed as well. The only option left appears to be deleting the volume, which is not acceptable when the volume holds data.

Version-Release number of selected component (if applicable): GlusterFS 3.6.2

How reproducible:

Steps to Reproduce:
1. Have a disperse volume:

[root@SC92 log]# gluster volume info dv3
Volume Name: dv3
Type: Disperse
Volume ID: 9547a2c0-1136-4fc9-915f-47d016a30484
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.10.60.182:/exports/182-ts3/dv3
Brick2: 10.10.60.90:/exports/90-ts3/dv3
Brick3: 10.10.60.92:/exports/92-ts3/dv3
Options Reconfigured:
snap-activate-on-create: enable

2. Issue an "add-brick" command on node SC92, using an invalid path for the brick on node SC90:

[root@SC92 log]# gluster volume add-brick dv3 10.10.60.182:/exports/182-ts4/dv3 10.10.60.90:/exports/90-ts42/dv3 10.10.60.92:/exports/92-ts4/dv3
volume add-brick: failed: Staging failed on 10.10.60.90. Error: Failed to create brick directory for brick 10.10.60.90:/exports/90-ts42/dv3. Reason : No such file or directory

3. Issue the "add-brick" command on node SC92 again, now using the valid path for the brick on node SC90:

[root@SC92 log]# gluster volume add-brick dv3 10.10.60.182:/exports/182-ts4/dv3 10.10.60.90:/exports/90-ts4/dv3 10.10.60.92:/exports/92-ts4/dv3
volume add-brick: failed: /exports/92-ts4/dv3 is already part of a volume

4. Verify the volume itself is unchanged:
[root@SC92 log]# gluster volume info dv3
Volume Name: dv3
Type: Disperse
Volume ID: 9547a2c0-1136-4fc9-915f-47d016a30484
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.10.60.182:/exports/182-ts3/dv3
Brick2: 10.10.60.90:/exports/90-ts3/dv3
Brick3: 10.10.60.92:/exports/92-ts3/dv3
Options Reconfigured:
snap-activate-on-create: enable

Actual results:
The new expansion bricks are rejected by every subsequent "add-brick" attempt, and "remove-brick" calls fail; the volume stays at its original 3 bricks.

Expected results:
After a failed "add-brick", repeating the command with corrected brick paths should succeed.

Additional info:
The problem is with the bricks that were used for the expansion. After the "add-brick" command fails, some extended attributes are left behind on the expansion bricks, and these attributes prevent the bricks from being accepted by a later "add-brick" command. The volume itself is OK.
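A commonly used cleanup for this state (an assumption based on general GlusterFS administration practice, not something confirmed in this report) is to remove the leftover trusted.* extended attributes and the .glusterfs directory from each rejected expansion brick. The sketch below lists those hypothetical commands in comments, then demonstrates the same setfattr/getfattr mechanics on a scratch directory, using an unprivileged user.* attribute as a stand-in for trusted.glusterfs.volume-id so it can be run without root:

```shell
# Hypothetical cleanup on a real brick (needs root; path taken from this report):
#   setfattr -x trusted.glusterfs.volume-id /exports/92-ts4/dv3
#   setfattr -x trusted.gfid /exports/92-ts4/dv3
#   rm -rf /exports/92-ts4/dv3/.glusterfs
#
# Runnable demonstration of the same mechanics on a scratch directory,
# with a user.* attribute standing in for the trusted.* one.
brick=$(mktemp -d -p .)                                # stand-in for a brick directory
setfattr -n user.demo.volume-id -v 9547a2c0 "$brick"   # simulate the leftover attribute
before=$(getfattr --only-values -n user.demo.volume-id "$brick" 2>/dev/null)
echo "before removal: $before"
setfattr -x user.demo.volume-id "$brick"               # remove it, as on a real brick
after=$(getfattr -d "$brick" 2>/dev/null | grep user.demo || true)
echo "after removal: ${after:-attribute cleared}"
rm -rf "$brick"
```

Whether this is safe on a given setup should be verified against the Gluster documentation for the release in use; the attribute names above are the ones glusterd checks when it reports "is already part of a volume".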
Assigning to glusterd based on comment-1
This is not a security bug, and it is not going to be fixed in 3.6.x because of http://www.gluster.org/pipermail/gluster-users/2016-July/027682.html
If the issue persists in the latest releases, please feel free to clone this bug