Bug 1409202
| Summary: | Warning messages logged when an offline brick of an EC volume comes back up are difficult for end users to understand. | |||
|---|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Ashish Pandey <aspandey> | |
| Component: | disperse | Assignee: | Sunil Kumar Acharya <sheggodu> | |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | ||
| Severity: | high | Docs Contact: | ||
| Priority: | unspecified | |||
| Version: | mainline | CC: | aflyhorse, amukherj, aspandey, bsrirama, bugs, nchilaka, rhs-bugs, sheggodu, storage-qa-internal | |
| Target Milestone: | --- | |||
| Target Release: | --- | |||
| Hardware: | x86_64 | |||
| OS: | Linux | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-3.10.0 | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | 1408361 | |||
| : | 1414347 1427089 1427419 1435592 (view as bug list) | Environment: | ||
| Last Closed: | 2017-03-06 17:41:22 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1408361 | |||
| Bug Blocks: | 1414347, 1427089, 1427419, 1435592 | |||
Comment 1
Sunil Kumar Acharya
2017-01-03 12:06:07 UTC
REVIEW: http://review.gluster.org/16315 (cluster/ec: Fixing log message) posted (#2) for review on master by Anonymous Coward

REVIEW: http://review.gluster.org/16315 (cluster/ec: Fixing log message) posted (#3) for review on master by Sunil Kumar Acharya

COMMIT: http://review.gluster.org/16315 committed in master by Xavier Hernandez (xhernandez)

------

commit cc55be619830bc64544a1044f05367b8be9421bc
Author: Sunil Kumar H G <sheggodu>
Date: Fri Dec 30 14:11:15 2016 +0530

    cluster/ec: Fixing log message

    Updating the warning message with details to improve
    user understanding.

    BUG: 1409202
    Change-Id: I001f8d5c01c97fff1e4e1a3a84b62e17c025c520
    Signed-off-by: Sunil Kumar H G <sheggodu>
    Reviewed-on: http://review.gluster.org/16315
    Tested-by: Sunil Kumar Acharya
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>

I'm currently facing a flood of EC errors, while "volume status" says all bricks are online. I could not fully understand what ec_bin() does, so while waiting for this patch to be built and released, could anyone point me to a document explaining how these values are calculated?
my log excerpt:
> [2017-02-16 03:02:40.455644] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-mainvol-disperse-1: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=39, bad=6)
> [2017-02-16 03:02:40.455684] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-mainvol-disperse-1: Heal failed [Invalid argument]
ec_bin() converts the hexadecimal values shown in the message to bit flags. For example, your log currently shows:

> [2017-02-16 03:02:40.455644] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-mainvol-disperse-1: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=39, bad=6)

After applying the patch it would look something like:

> [--------------------------] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-mainvol-disperse-1: Operation failed on some subvolumes (up=111111, mask=111111, remaining=000000, good=111001, bad=000110)

This indicates that the 2nd and 3rd bricks (counting flags from right to left) in 0-mainvol-disperse-1 are bad.

Thank you for your explanation. But since my volume is a 4+2 EC volume, shouldn't it fix the bad bricks itself? Or does it mean the underlying filesystem is corrupted, so it cannot do anything other than report it? Many thanks.

Yes, the data should get healed. Please perform a basic sanity check of the volume.

This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/