Bug 1773991 - Bricks are not available when volume create fails
Summary: Bricks are not available when volume create fails
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 7
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1776801
Reported: 2019-11-19 11:57 UTC by Sheetal Pamecha
Modified: 2020-01-09 07:56 UTC
CC: 3 users

Fixed In Version:
Clone Of:
: 1776801
Environment:
Last Closed: 2020-01-09 07:56:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sheetal Pamecha 2019-11-19 11:57:15 UTC
Description of problem:
If volume creation fails, glusterd reports bricks to be part of volume.

Version-Release number of selected component (if applicable):


How reproducible:
Always


Steps to Reproduce:
1. Fail to create a volume for some reason (for example, the brick-order check rejecting multiple replica bricks on the same node).
2. Trigger the create command again with the same bricks.


Actual results:
[root@dhcp42-109 glusterfs]# gluster v info
No volumes present
[root@dhcp42-109 glusterfs]# gluster v create test-vol1 replica 3 10.70.42.109:/home/gluster/b8 10.70.42.109:/home/gluster/b7 10.70.42.109:/home/gluster/b6
volume create: test-vol1: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. 
[root@dhcp42-109 glusterfs]# gluster v create test-vol1 replica 3 10.70.42.109:/home/gluster/b8 10.70.42.109:/home/gluster/b7 10.70.42.109:/home/gluster/b6
volume create: test-vol1: failed: /home/gluster/b8 is already part of a volume
[root@dhcp42-109 glusterfs]# gluster v info
No volumes present
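
The second failure suggests that the first (failed) create already marked the brick directories. A quick way to confirm this is to check for the volume-id xattr on a brick root; the command below is a sketch that assumes trusted.glusterfs.volume-id is the attribute glusterd stamps on bricks and reuses the brick path from the transcript above:

getfattr -n trusted.glusterfs.volume-id -e hex /home/gluster/b8

If this prints a trusted.glusterfs.volume-id=0x... line, the brick is still marked even though 'gluster v info' reports no volumes.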


Expected results:
If volume creation fails, the bricks should remain available for a subsequent create.

Additional info:

Comment 1 Sanju 2019-11-26 11:27:49 UTC
Hi Sheetal,

Every CLI transaction goes through four phases: locking, staging, commit, and unlock.

All validations happen in the staging phase of the transaction; once a validation fails, we error out. Any modifications already made as part of the current transaction are not reverted, because glusterd's architecture has no rollback mechanism.

When a volume create operation is issued, we set xattrs on the bricks. If the transaction then fails, the xattrs remain on the bricks, which causes subsequent volume create attempts with the same bricks to fail if the force option is not used.
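
Until a fix is available, a common manual workaround is to clear the leftover markers so the bricks can be reused. The commands below are a sketch, assuming the brick directories from the report are not part of any other volume (repeat for each brick, on the node that hosts it):

setfattr -x trusted.glusterfs.volume-id /home/gluster/b8
setfattr -x trusted.gfid /home/gluster/b8
rm -rf /home/gluster/b8/.glusterfs

For a create that failed this early, trusted.gfid and the .glusterfs directory may not exist yet; in that case setfattr just reports that the attribute is absent and rm -rf silently does nothing, both of which are harmless here.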

I will send out a patch to fix this. 

Thanks,
Sanju

Comment 2 Worker Ant 2019-11-26 11:54:19 UTC
REVIEW: https://review.gluster.org/23760 (glusterd: set xaatrs afer checking the brick order) posted (#1) for review on master by Sanju Rakonde

Comment 3 Worker Ant 2019-11-26 12:24:22 UTC
REVISION POSTED: https://review.gluster.org/23760 (glusterd: set xaatrs after checking the brick order) posted (#3) for review on master by Sanju Rakonde
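
For what it's worth, the review title suggests the fix is simply to run the brick-order check before stamping the xattrs during staging. Under that reading, repeating the reproduction steps on a patched build should behave as sketched below (expected behavior, not output captured from a patched system):

gluster v create test-vol1 replica 3 10.70.42.109:/home/gluster/b8 10.70.42.109:/home/gluster/b7 10.70.42.109:/home/gluster/b6
getfattr -n trusted.glusterfs.volume-id -e hex /home/gluster/b8

The create still fails with the brick-order warning, but getfattr reports no such attribute, so the same command can be retried (for example with 'force') without hitting "is already part of a volume".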

Comment 4 Sanju 2020-01-09 07:56:29 UTC
The patch was merged more than a month ago; I'm not sure why the bot didn't close the bug.

