Bug 1603082
| Summary: | Manual Index heal throws error which is misguiding when heal is triggered to heal a brick if another brick is down | |||
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Upasana <ubansal> | |
| Component: | glusterd | Assignee: | Sanju <srakonde> | |
| Status: | CLOSED WONTFIX | QA Contact: | Bala Konda Reddy M <bmekala> | |
| Severity: | high | Docs Contact: | ||
| Priority: | low | |||
| Version: | rhgs-3.4 | CC: | amukherj, aspandey, moagrawa, nchilaka, ravishankar, rhinduja, rhs-bugs, sankarshan, sheggodu, srakonde, storage-qa-internal, ubansal, vbellur | |
| Target Milestone: | --- | Keywords: | ZStream | |
| Target Release: | --- | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | | Doc Type: | If docs needed, set a value | |
| Doc Text: | | Story Points: | --- | |
| Clone Of: | ||||
| : | 1676812 (view as bug list) | Environment: | ||
| Last Closed: | 2019-03-25 08:24:27 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1676812 | |||
Comment 2
Upasana
2018-07-19 06:36:23 UTC
I discussed with Upasana, and based on further analysis, here is the summary. The heal was happening when I checked on my setup; however, the error message is misleading, and a regression has been introduced in the message itself, hence the title change. If Upasana finds that the file is in fact not healing (she is unable to recollect at this point, given that this bug was raised about 20 days back), she will raise a new bug with the reason the heal is not happening.

One very important note: the error message differs between 3.3.1 (latest live, 3.8.4-54.15) and the latest 3.4 (3.12.2-15). For the steps mentioned by Upasana, below is the error message (using pkill) on 3.3.1 and 3.4.0.

3.3.1:
------
Launching heal operation to perform index self heal on volume ecv has been unsuccessful on bricks that are down. Please check if all brick processes are running.

3.4.0:
-------
Launching heal operation to perform index self heal on volume ecv has been unsuccessful: Commit failed on rhs-client19.lab.eng.blr.redhat.com. Please check log file for details.

A simpler test case, with no I/O running at all: take an EC volume, kill a brick on one node, then kill another brick on another node, and issue a heal command.

3.3.1:
-----------
Launching heal operation to perform index self heal on volume ecv has been unsuccessful on bricks that are down. Please check if all brick processes are running.

Note: I checked with kill -9/-15 and even with brick-mux on, and saw the same error message.

3.4.0:
--------
Launching heal operation to perform index self heal on volume dispersed has been unsuccessful: Commit failed on 10.70.35.3. Please check log file for details.

pkill glusterfsd / kill -15 <glusterfsd-pid>:
Launching heal operation to perform index self heal on volume ecv has been unsuccessful: Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.
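The reproduction steps above can be sketched as a short command sequence. This is only an illustration of the scenario described in the comment, not a tested recipe: the volume layout, host names (server1..server6), and brick path are assumptions, and the commands must be run against a live multi-node Gluster cluster.

```shell
# Hypothetical reproduction sketch for the scenario above.
# Assumptions: 6 nodes named server1..server6, brick path /bricks/ecv/brick,
# a 4+2 dispersed (EC) volume named ecv. Requires a running gluster cluster.

# Create and start the dispersed volume (run on any one node):
gluster volume create ecv disperse 6 redundancy 2 \
    server{1..6}:/bricks/ecv/brick force
gluster volume start ecv

# On server1: kill its brick process (the comment reports the same message
# with kill -9, kill -15, and with brick multiplexing enabled).
pkill -f '/bricks/ecv/brick'

# On server2: kill a second brick the same way.
pkill -f '/bricks/ecv/brick'

# From any node, trigger an index heal. On 3.3.1 this reportedly prints the
# "unsuccessful on bricks that are down" message; on 3.4.0 it prints the
# commit-failure message quoted in the comment above.
gluster volume heal ecv
```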
Upasana/Nag, please take a look at https://review.gluster.org/#/c/glusterfs/+/22209/1//COMMIT_MSG and provide your comments. Looking at the patch, it doesn't look like we can have a detailed message claiming "brick may be down" when the commit fails for any reason other than the scenarios Ravi explained below.

Without this patch, here are some meaningful errors:
=====================================================
[root@ravi2 glusterfs]# gluster v heal testvol
Launching heal operation to perform index self heal on volume testvol has been unsuccessful: Volume testvol is not started.
[root@ravi2 glusterfs]# gluster v heal testvol
Launching heal operation to perform index self heal on volume testvol has been unsuccessful: Self-heal-daemon is disabled. Heal will not be triggered on volume testvol
[root@ravi2 glusterfs]# gluster v heal testvol
Launching heal operation to perform index self heal on volume testvol has been unsuccessful: Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.
=====================================================

My take is we close this bug as can't-fix. Ravi - do you agree?

Makes sense to me, Atin.

Upasana - please go through the above two comments. We're going to close this bug with the justification mentioned in comment 37. If you happen to disagree, please raise your voice now (with a counter-justification); otherwise this bug will be closed by the end of this week.