Bug 1516484
Summary: | [bitrot] scrub doesn't catch file manually changed on one of bricks for disperse or arbiter volumes | | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Martin Bukatovic <mbukatov> |
Component: | bitrot | Assignee: | Kotresh HR <khiremat> |
Status: | CLOSED NOTABUG | QA Contact: | Sweta Anandpara <sanandpa> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.3 | CC: | mbukatov, rhs-bugs, storage-qa-internal |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2017-11-25 15:48:59 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Martin Bukatovic
2017-11-22 17:43:03 UTC
Adding blocker flag. If my understanding of the bitrot feature is incorrect, link to the documentation along with an explanation and remove the blocker flag.

(In reply to Martin Bukatovic from comment #3)
> I'm going to retry with a distributed volume (which I haven't tried yet, as
> this is not part of the volume configurations we test with).

I retested with a distrep 6x2 volume[1] and I see the same problem.

[1] https://github.com/usmqe/usmqe-setup/blob/master/gdeploy_config/volume_alpha_distrep_6x2.create.conf

Hi Martin,

'gluster volume bitrot <volname> scrub ondemand' reports a success --> that is supposed to be interpreted as: "Triggering the scrubber was a success on <volname>".

Whether the scrubber was able to detect any problems or not is supposed to be checked with the command "gluster volume bitrot <volname> scrub status".

After step 7, when we run the mentioned command, are we seeing the GFID of the file that was corrupted under the corresponding node details, along with 'error count' set to 1? If yes, the scrubber functionality is working as expected. Whether we need to fix the docs for this can be discussed further.

(In reply to Sweta Anandpara from comment #7)
> 'gluster volume bitrot <volname> scrub ondemand' reports a success --> that
> is supposed to be interpreted as: "Triggering the scrubber was a success on
> <volname>"
>
> Whether the scrubber was able to detect any problems or not is supposed to
> be checked with the command - "gluster volume bitrot <volname> scrub status".
>
> After step7, when we run the mentioned command, are we seeing the GFID of
> the file that was corrupted, under the corresponding node details, along
> with 'error count' set to 1? If yes, scrubber functionality is working as
> expected.

I have retried with an arbiter volume (volume_beta_arbiter_2_plus_1x2) and can see the error being detected in the output of `scrub status`. So it works as expected. I'm sorry for the confusion. I'm going to create follow-up BZs for docs or other components after additional checking.

Update: I was using the upstream documentation when drafting the test case, which doesn't seem to contain the explanation you kindly provided in comment 7:

* http://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#bitrot-detection
* http://docs.gluster.org/en/latest/release-notes/3.9.0/#on-demand-scrubbing-for-bitrot-detection

I created this upstream issue to get this fixed: https://github.com/gluster/glusterdocs/issues/303

The downstream documentation describes the feature correctly: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/ch15s02 so there is no need to create a downstream doc BZ.
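For reference, a minimal sketch of the check described in comment 7, assuming a hypothetical volume name `testvol`; only the two commands already named in this report are used, and the comments restate how comment 7 says the output should be read:

```sh
# Trigger the scrubber immediately. A "success" here only means the scrub
# was started on testvol, not that no corruption was found.
gluster volume bitrot testvol scrub ondemand

# Inspect the scrub result. A corrupted file is reported as a GFID under the
# details of the node hosting the affected brick, with 'error count' set to 1.
gluster volume bitrot testvol scrub status
```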