Description of problem:
Currently the status command for bitrot scrub lists the corrupted objects (only files, as of now) of the volume on each node.
"gluster volume bitrot <volume name> scrub status" reports the GFIDs that are corrupt and the host on which each corruption was found.
The preferred way to find out which file maps to a given GFID is "getfattr" (assuming all the Gluster and mount options were set as described in the link).
The problem is that "getfattr" does not tell which brick contains the corrupt file; it only gives the path relative to the FUSE mount. So it is difficult to determine which brick the corrupted object belongs to.
If every brick were on a distinct host, there would be no problem: "bitrot status" already gives the hostname, so the brick could be inferred. But in general one cannot assume that a host holds only one brick of the volume.
With "find" it is possible to locate the correct brick, but that command can be expensive on large bricks.
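A minimal sketch of the GFID-to-path lookup, assuming the volume is mounted with the aux-gfid-mount option so the virtual ".gfid" directory is available (mount point "/mnt/glustervol" and the GFID value are placeholders, not from the original report):

```shell
# Example GFID as it would appear in "scrub status" output (placeholder value).
GFID="11111111-2222-3333-4444-555555555555"

# These commands need a running Gluster volume, so they are shown as comments:
# mount -t glusterfs -o aux-gfid-mount server:/volname /mnt/glustervol
# getfattr -n glusterfs.ancestry.path -e text "/mnt/glustervol/.gfid/$GFID"

# The virtual path queried by getfattr above is built like this:
VPATH="/mnt/glustervol/.gfid/$GFID"
echo "$VPATH"
```

Note that the path returned this way is still relative to the FUSE mount, which is exactly the limitation described above.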
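One way the brick-side lookup can be done, assuming the standard GlusterFS brick layout where every file has a hard link under "<brick>/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full GFID>" (the brick path "/data/brick1" is a placeholder):

```shell
GFID="11111111-2222-3333-4444-555555555555"   # placeholder GFID
BRICK="/data/brick1"                          # placeholder brick path

# Build the backend path of the GFID hard link inside the brick:
GFID_PATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "$GFID_PATH"

# If that link exists, the real file on this brick can be found via -samefile,
# scanning only this brick (shown as a comment, as it needs a real brick):
# find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$GFID_PATH" -print
```

This narrows "find" to one brick at a time, but it still has to be repeated per brick on the node, which is what makes the missing brick name in "scrub status" painful.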
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a gluster volume and start it.
2. Enable bitrot detection on the volume.
3. Create some files.
4. Simulate bitrot by editing a file directly in the backend brick (instead of through the mount point).
5. Run an on-demand scrub ("scrub ondemand") to start scrubbing.
6. Run "gluster volume bitrot <volume name> scrub status" to get the status.
Actual result: the brick on which the object (i.e. the GFID) is corrupted is not shown; only the node is shown.
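The reproduction steps above can be sketched as follows. The gluster commands are shown as comments because they need a running trusted pool; volume, host, and brick names are placeholders. The executable part demonstrates step 4, corrupting a file on the brick behind Gluster's back:

```shell
# gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b2
# gluster volume start testvol
# gluster volume bitrot testvol enable
# (create some files through the FUSE mount)
# gluster volume bitrot testvol scrub ondemand
# gluster volume bitrot testvol scrub status

# Simulating bitrot: modify the file directly under the brick directory,
# bypassing the mount point, so the stored checksum no longer matches.
BRICK_FILE=$(mktemp)                 # stand-in for a file under the brick
echo "original data" > "$BRICK_FILE"
echo "garbage" >> "$BRICK_FILE"      # the scrubber will flag this as corrupt
```

On the next scrub run, the GFID of this file should appear in the "scrub status" output for the node hosting the brick.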
REVIEW: https://review.gluster.org/19901 (features/bitrot: show the corresponding brick for the corrupted objects) posted (#1) for review on master by Raghavendra Bhat
COMMIT: https://review.gluster.org/19901 committed in master by "Amar Tumballi" <email@example.com> with a commit message: features/bitrot: show the corresponding brick for the corrupted objects
Currently with the "gluster volume bitrot <volume name> scrub status" command
the corrupted objects of a node are shown, but the brick to which each corrupted
object belongs is not. Showing the brick of the corrupted object
will help in situations where a node hosts multiple bricks of a volume.
Signed-off-by: Raghavendra Bhat <firstname.lastname@example.org>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.
glusterfs-v4.1.0 has been announced on the Gluster mailing lists, and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.