Bug 1655854

Summary:          Converting distribute to replica-3/arbiter volume fails
Product:          [Community] GlusterFS
Component:        replicate
Reporter:         Karthik U S <ksubrahm>
Assignee:         Karthik U S <ksubrahm>
Status:           CLOSED CURRENTRELEASE
Severity:         unspecified
Priority:         unspecified
Version:          mainline
CC:               bugs, nchilaka
Hardware:         Unspecified
OS:               Unspecified
Fixed In Version: glusterfs-6.0
Type:             Bug
Last Closed:      2019-03-25 16:32:33 UTC
Bug Blocks:       1645480

Description Karthik U S 2018-12-04 05:43:50 UTC
Description of problem:
Converting a plain distribute volume to a replica-3 or arbiter configuration fails with an error while setting the trusted.add-brick extended attribute. This happens even if the volume has been mounted at least once.

Version-Release number of selected component (if applicable):


How reproducible:
Always


Steps to Reproduce:
1. Create a 1x1 or 2x1 volume, then start and mount it
2. Try to convert it to 1x3 or 1x(2+1) (respectively 2x3 or 2x(2+1) for the 2x1 case) using the add-brick command; see the example commands below
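
A minimal sketch of the reproduction for the 1x1 case; the volume name, hostname, brick paths, and mount point (testvol, server1, /bricks/..., /mnt/testvol) are illustrative placeholders, not taken from this report:

  # Create, start, and mount a plain distribute (1x1) volume
  gluster volume create testvol server1:/bricks/brick1
  gluster volume start testvol
  mount -t glusterfs server1:/testvol /mnt/testvol

  # Attempt to convert it to replica 3 (use 'replica 3 arbiter 1'
  # for the arbiter variant) -- this add-brick step fails:
  gluster volume add-brick testvol replica 3 \
      server1:/bricks/brick2 server1:/bricks/brick3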

Actual results:
The add-brick operation fails, but the new bricks still get added to the volume and remain offline.
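
The offline state of the new bricks can be confirmed with volume status (using the placeholder volume name from the reproduction sketch above):

  # The newly added bricks show 'N' in the Online column
  gluster volume status testvol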

Expected results:
The operation should succeed and the bricks should be online.

Additional info:
This is a regression caused by https://review.gluster.org/#/c/glusterfs/+/17673/

Comment 1 Worker Ant 2018-12-04 06:13:15 UTC
REVIEW: https://review.gluster.org/21791 (cluster/afr: Allow lookup on root if it is from the aux mount) posted (#1) for review on master by Karthik U S

Comment 2 Worker Ant 2018-12-18 10:31:09 UTC
REVIEW: https://review.gluster.org/21791 (cluster/afr: Allow lookup on root if it is from ADD_REPLICA_MOUNT) posted (#12) for review on master by Ravishankar N

Comment 3 Shyamsundar 2019-03-25 16:32:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/