Description of problem:
During a volume create, bricks are marked as being part of a volume. If the create fails for some reason, this is not cleaned up and the bricks are still marked as being in use. The volume may not be deleted by a volume delete command, and the volume create command can't use those bricks until they are cleaned up.

Version-Release number of selected component (if applicable):
glusterfs-server-3.4.0-0.3.alpha3.el6.x86_64
glusterfs-fuse-3.4.0-0.3.alpha3.el6.x86_64
glusterfs-3.4.0-0.3.alpha3.el6.x86_

How reproducible:
Every time.

Steps to Reproduce:
1. Create a situation where a volume create will fail on one of the nodes but succeed on the others (e.g. via an iptables rule).
2. Run a volume create command involving all nodes, including the one that will fail.
3. Try a volume delete, or a volume create using the same bricks.

Actual results:
The failed create leaves the bricks marked as part of a volume, so subsequent volume delete and volume create commands on those bricks fail.

Expected results:
A failed volume create should clean up after itself, without the user having to use the steps at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ to clean up.

Additional info:
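The steps above can be sketched as a shell session. The hostnames, brick paths, and volume names are illustrative placeholders, not taken from the report; 24007 is glusterd's standard management port.

```shell
# On one node (say node3), block glusterd traffic so the create fails there:
iptables -A INPUT -p tcp --dport 24007 -j DROP

# From another node, attempt the create; it fails on node3:
gluster volume create testvol replica 3 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1

# Even though the create failed, the bricks on the nodes where it
# succeeded now carry volume metadata, so both of these are rejected:
gluster volume delete testvol
gluster volume create testvol2 node1:/bricks/b1 node2:/bricks/b1
# glusterd reports that the path "or a prefix of it is already part
# of a volume", the error discussed at the blog URL above.
```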
I'd be interested in seeing this feature added to Gluster.
I don't think we will be working on this bug in the near future. We do have a workaround: using the 'force' option as part of volume create. Considering that, I don't think it's a high-priority bug, and it can be closed. Feel free to reopen with proper justification.
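The workaround mentioned can look like the following; the volume name and brick paths are placeholders for whatever the failed create left behind.

```shell
# 'force' tells glusterd to proceed even though the bricks still carry
# metadata from the earlier failed create, instead of rejecting them:
gluster volume create testvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 force
```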