Bug 1569198 - bitrot scrub status does not show the brick where the object (file) is corrupted
Summary: bitrot scrub status does not show the brick where the object (file) is corrupted
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: bitrot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-18 19:18 UTC by Raghavendra Bhat
Modified: 2018-06-20 18:05 UTC (History)
CC List: 2 users

Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-20 18:05:09 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2018-04-18 19:18:26 UTC
Description of problem:

Currently, the status command for bitrot scrub shows the list of corrupted objects (files, as of now) for that volume on each node.

"gluster volume bitrot <volume name> scrub status" gets the GFIDs that are corrupt and on which Host this happens.

The preferred way of finding out which file maps to that GFID is to use "getfattr" (assuming all Gluster and mount options were set as described in the link); see the sketch below.
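
For reference, a minimal sketch of that GFID-to-path mapping, assuming the volume was mounted with the aux-gfid-mount option (server, volume, and mount point names here are hypothetical):

    # mount with the aux-gfid option so GFIDs can be resolved
    mount -t glusterfs -o aux-gfid-mount server1:/testvol /mnt/testvol

    # resolve a GFID reported by scrub status to a path on the mount
    getfattr -n glusterfs.ancestry.path -e text /mnt/testvol/.gfid/<GFID>

The returned path is relative to the FUSE mount, which is exactly the limitation described next.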

The problem is that "getfattr" does not tell which brick contains the corrupt file; it only gives the path relative to the FUSE mount. So it is difficult to find out which brick the corrupted object belongs to.

If we assume that every brick is on a distinct host, there is no problem, because "bitrot status" gives us the hostname, so we can infer which brick is meant. But in general you cannot assume there is only one brick per host (e.g. a volume with bricks server1:/bricks/b1 and server1:/bricks/b2).

With "find" it is possible to find the correct brick. But the command is possibly expensive.

Version-Release number of selected component (if applicable):


How reproducible:

Always

Steps to Reproduce:
1. Create a gluster volume and start it.
2. Enable bitrot detection on the volume
3. Create some files
4. Simulate bitrot by editing a file directly in the backend brick (instead of the mount point)
5. Run ondemand scrub to start scrubbing
6. Run "gluster volume bitrot <volume name> scrub status" to get the status (a command-level sketch of all the steps follows below)
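
A minimal command sequence for these steps might look like this (volume name, server, and brick paths are hypothetical; two bricks are placed on the same host to trigger the ambiguity described above):

    gluster volume create testvol server1:/bricks/b1 server1:/bricks/b2
    gluster volume start testvol
    gluster volume bitrot testvol enable
    mount -t glusterfs server1:/testvol /mnt/testvol
    echo data > /mnt/testvol/file1
    # wait for the bitrot signer to sign the file, then corrupt it
    # directly on the brick that holds it, bypassing the mount
    echo garbage >> /bricks/b1/file1
    gluster volume bitrot testvol scrub ondemand
    gluster volume bitrot testvol scrub status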

Actual results:

The brick where the object (i.e. the GFID) is corrupted is not shown; only the node is shown.

Expected results:


Additional info:

Comment 1 Worker Ant 2018-04-18 19:39:07 UTC
REVIEW: https://review.gluster.org/19901 (features/bitrot: show the corresponding brick for the corrupted objects) posted (#1) for review on master by Raghavendra Bhat

Comment 2 Worker Ant 2018-04-20 05:11:21 UTC
COMMIT: https://review.gluster.org/19901 committed in master by "Amar Tumballi" <amarts> with commit message: features/bitrot: show the corresponding brick for the corrupted objects

Currently, the "gluster volume bitrot <volume name> scrub status" command
shows the corrupted objects of a node, but not the brick to which each
corrupted object belongs. Showing the brick of the corrupted object
will help in situations where a node hosts multiple bricks of a volume.

Change-Id: I7fbdea1e0072b9d3487eb10757468bc02d24df21
fixes: bz#1569198
Signed-off-by: Raghavendra Bhat <raghavendra>

Comment 3 Shyamsundar 2018-06-20 18:05:09 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

