Bug 1580269 - [Remove-brick+Rename] Failure count shows zero though there are file migration failures
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
Whiteboard: dht-data-loss
Depends On: 1577051
Reported: 2018-05-21 06:36 UTC by Susant Kumar Palai
Modified: 2018-10-23 15:09 UTC (History)
9 users

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1577051
Last Closed: 2018-10-23 15:09:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Comment 1 Worker Ant 2018-05-21 06:39:41 UTC
REVIEW: https://review.gluster.org/20044 (cluster/dht: Increase failure count for lookup failure in remove-brick op) posted (#1) for review on master by Susant Palai

Comment 2 Worker Ant 2018-05-28 07:31:13 UTC
COMMIT: https://review.gluster.org/20044 committed in master by "N Balachandran" <nbalacha@redhat.com> with a commit message- cluster/dht: Increase failure count for lookup failure in remove-brick op

An entry returned by readdirp might be renamed just before migration, leading
to lookup failures. For such a lookup failure, the remove-brick process does
not increment its failure count. Although a warning message after remove-brick
commit already tells the user to check the decommissioned brick for any files
that were not migrated, it is better to increase the failure count so that the
user checks the decommissioned bricks for leftover files before committing.

Note: This can result in false negative cases for rm -rf interaction with
remove-brick op, where remove-brick shows non-zero failed count, but the
entry was actually deleted by user.

Change-Id: Icd1047ab9edc1d5bfc231a1f417a7801c424917c
fixes: bz#1580269
Signed-off-by: Susant Palai <spalai@redhat.com>

Comment 3 Shyamsundar 2018-10-23 15:09:43 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/
