Bug 1575587 - Leverage MDS subvol for dht_removexattr also
Summary: Leverage MDS subvol for dht_removexattr also
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: x86_64
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-07 11:44 UTC by Mohit Agrawal
Modified: 2018-10-23 15:07 UTC
CC List: 1 user

Fixed In Version: glusterfs-5.0
Clone Of:
Environment:
Last Closed: 2018-10-23 15:07:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Mohit Agrawal 2018-05-07 11:44:31 UTC
Description of problem:

To avoid incorrect user xattrs on the backend, leverage the MDS subvol for dht_removexattr as well. Since an MDS subvol was introduced (in patch https://review.gluster.org/#/c/15468/) to heal custom xattrs, the same MDS can also be used when removing custom xattrs.
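
A minimal sketch of the intended ordering follows. This is not the actual dht_removexattr xlator code; the struct layout, function names, and MDS index below are hypothetical simplifications. The idea is to wind removexattr to the MDS subvol first and touch the remaining subvols only after the MDS copy has been updated, so the MDS always reflects the true xattr state and can serve as the heal source.

/* Minimal sketch, assuming a toy model of subvols; not the real
 * dht_removexattr implementation. */
#include <errno.h>
#include <stdio.h>

struct subvol {
    const char *name;
    char xattr[32];   /* one custom xattr slot; "" means absent */
    int  up;          /* 1 if the brick is reachable */
};

/* Remove the xattr on a single subvol; fails if the brick is down. */
static int subvol_removexattr(struct subvol *s)
{
    if (!s->up)
        return -ENOTCONN;
    s->xattr[0] = '\0';
    return 0;
}

/* MDS-first removexattr: if the MDS copy cannot be updated, abort so
 * the MDS keeps reflecting the true state; a failure on a non-MDS
 * subvol is left for the xattr-heal path to repair on a later lookup. */
static int removexattr_mds_first(struct subvol *v, int n, int mds)
{
    int ret = subvol_removexattr(&v[mds]);
    if (ret != 0)
        return ret;                      /* MDS unreachable: fail the fop */

    for (int i = 0; i < n; i++)
        if (i != mds)
            (void)subvol_removexattr(&v[i]);
    return 0;
}

int main(void)
{
    struct subvol v[3] = {
        { "subvol-0", "user.foo=bar", 1 },
        { "subvol-1", "user.foo=bar", 0 },   /* this brick is down */
        { "subvol-2", "user.foo=bar", 1 },
    };

    printf("removexattr returned %d\n", removexattr_mds_first(v, 3, 0));
    for (int i = 0; i < 3; i++)
        printf("%s: %s\n", v[i].name,
               v[i].xattr[0] ? v[i].xattr : "(no xattr)");
    return 0;
}

In this toy run subvol-1 is down, so its stale copy survives; but because the MDS copy was removed first, a later lookup has an authoritative source to heal against.
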
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2018-05-07 11:56:35 UTC
REVIEW: https://review.gluster.org/19971 (cluster/dht: Leverage MDS subvol for dht_removexattr also) posted (#1) for review on master by MOHIT AGRAWAL

Comment 2 Worker Ant 2018-06-11 13:12:15 UTC
COMMIT: https://review.gluster.org/19971 committed in master by "Amar Tumballi" <amarts> with commit message: cluster/dht: Leverage MDS subvol for dht_removexattr also

Problem: In a distributed volume, a situation can arise where custom
         extended attributes are not removed from all bricks after a
         brick is stopped/started or a new brick is added.

Solution: To resolve this, use the MDS subvol for removexattr as well.

BUG: 1575587
Change-Id: I7701e0d3833e3064274cb269f26061bff9b71f50
fixes: bz#1575587
Signed-off-by: Mohit Agrawal <moagrawa>
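
For context, here is a toy illustration (hypothetical code, not GlusterFS internals) of the backend inconsistency the commit message describes: when one brick is stopped while a custom xattr is removed, that brick rejoins carrying a stale copy.

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Three toy "bricks", each holding the same custom xattr. */
    char bricks[3][16];
    for (int i = 0; i < 3; i++)
        strcpy(bricks[i], "user.foo=bar");

    int down = 1;                 /* brick 1 is stopped */
    for (int i = 0; i < 3; i++)
        if (i != down)
            bricks[i][0] = '\0';  /* removexattr reaches only live bricks */

    /* Brick 1 restarts and still carries the removed xattr. */
    for (int i = 0; i < 3; i++)
        printf("brick%d: %s\n", i, bricks[i][0] ? bricks[i] : "(no xattr)");
    return 0;
}

Before this change, nothing recorded which state was correct; routing removexattr through the MDS subvol makes the MDS copy authoritative, so the stale copy on the restarted brick can be detected and healed.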

Comment 3 Shyamsundar 2018-10-23 15:07:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

