Bug 1463964
Summary: | heal info shows root directory as "Possibly undergoing heal" when heal is pending and heal daemon is disabled | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka>
Component: | replicate | Assignee: | Ravishankar N <ravishankar>
Status: | CLOSED ERRATA | QA Contact: | Vijay Avuthu <vavuthu>
Severity: | medium | Docs Contact: |
Priority: | high | |
Version: | rhgs-3.3 | CC: | amukherj, rhinduja, rhs-bugs, sheggodu, storage-qa-internal
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 3.4.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | rebase | |
Fixed In Version: | glusterfs-3.12.2-1 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2018-09-04 06:32:36 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1318895, 1467268, 1467269, 1467272 | |
Bug Blocks: | 1503134 | |
Description (Nag Pavan Chilakam, 2017-06-22 07:14:35 UTC)
Update:
========

Build used: glusterfs-server-3.12.2-6.el7rhgs.x86_64

Verified the scenario below for both 1 x 2 (replicate) and 2 x 3 (distributed-replicate) volumes; a command sketch follows this comment.

1. Create a volume and disable the self-heal daemon.
2. Create a zero-byte file f1 under the root of the mount.
3. Kill brick b1.
4. Append data to f1 and create a new file f2.
5. Bring b1 back up.
6. Check heal info.

Did not see any "Possibly undergoing heal" messages for the root directory:

```
# gluster vol heal 12 info
Brick 10.70.35.61:/bricks/brick1/b0
Status: Connected
Number of entries: 0

Brick 10.70.35.174:/bricks/brick1/b1
/f1
/f2
/
Status: Connected
Number of entries: 3

# gluster vol heal 23 info
Brick 10.70.35.61:/bricks/brick0/testvol_distributed-replicated_brick0
Status: Connected
Number of entries: 0

Brick 10.70.35.174:/bricks/brick0/testvol_distributed-replicated_brick1
Status: Connected
Number of entries: 0

Brick 10.70.35.17:/bricks/brick0/testvol_distributed-replicated_brick2
Status: Connected
Number of entries: 0

Brick 10.70.35.163:/bricks/brick0/testvol_distributed-replicated_brick3
Status: Connected
Number of entries: 0

Brick 10.70.35.136:/bricks/brick0/testvol_distributed-replicated_brick4
/f1
/f2
/
Status: Connected
Number of entries: 3

Brick 10.70.35.214:/bricks/brick0/testvol_distributed-replicated_brick5
/f1
/f2
/
Status: Connected
Number of entries: 3
```

Changing status to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
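For reference, a minimal shell sketch of the 1 x 2 verification steps above. The volume name (12) and brick paths mirror the heal-info output; the hostnames (host1, host2), the mount point /mnt/12, and the brick PID placeholder are assumptions, not the exact commands captured from the test run:

```sh
# Step 1: create a 1 x 2 replicate volume and disable the self-heal daemon.
# (hostnames and mount point below are placeholders)
gluster volume create 12 replica 2 \
    host1:/bricks/brick1/b0 host2:/bricks/brick1/b1 force
gluster volume start 12
gluster volume set 12 cluster.self-heal-daemon off

# Step 2: mount the volume and create a zero-byte file under its root.
mount -t glusterfs host1:/12 /mnt/12
touch /mnt/12/f1

# Step 3: kill the brick process for b1 on host2
# (look up its PID with "gluster volume status 12").
gluster volume status 12
kill -9 <pid-of-b1-brick-process>   # placeholder PID, run on host2

# Step 4: write to f1 and create f2 while b1 is down, so heal is pending.
echo "some data" >> /mnt/12/f1
touch /mnt/12/f2

# Step 5: bring the killed brick back up.
gluster volume start 12 force

# Step 6: check heal info.
gluster volume heal 12 info
```

Step 6 is the actual check: with the fix, the root directory ("/") is listed as an ordinary pending heal entry rather than "Possibly undergoing heal".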