Bug 1155073
| Summary: | Excessive logging in the self-heal daemon after a replace-brick | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu> |
| Component: | replicate | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.5.3 | CC: | bugs, gluster-bugs, ndevos, nsathyan, ravishankar, rhs-bugs, storage-qa-internal, vagarwal, vbellur |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.5.3 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1155027 | Environment: | |
| Last Closed: | 2014-11-21 16:03:22 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 969353, 969355, 1151303, 1155027 | | |
| Bug Blocks: | | | |
Comment 1
Anand Avati
2014-10-21 11:30:40 UTC
Description of problem:

After a replace-brick, the glustershd.log grew to 8 GB in less than one week. Almost all entries look like the following (only the gfids differ):

[2013-05-20 20:36:01.730851] W [client3_1-fops.c:473:client3_1_open_cbk] 0-web-client-1: remote operation failed: No such file or directory. Path: <gfid:4cfbc34f-9227-476d-bd23-b50613c94170> (00000000-0000-0000-0000-000000000000)
[2013-05-20 20:36:01.730883] E [afr-self-heal-data.c:1321:afr_sh_data_open_cbk] 0-web-replicate-0: open of <gfid:4cfbc34f-9227-476d-bd23-b50613c94170> failed on child web-client-1 (No such file or directory)
[2013-05-20 20:36:01.734192] W [client3_1-fops.c:1556:client3_1_inodelk_cbk] 0-web-client-1: remote operation failed: No such file or directory
[2013-05-20 20:36:01.735905] W [client3_1-fops.c:1656:client3_1_entrylk_cbk] 0-web-client-1: remote operation failed: No such file or directory
[2013-05-20 20:36:01.736096] E [afr-self-heal-entry.c:2352:afr_sh_post_nonblocking_entry_cbk] 0-web-replicate-0: Non Blocking entrylks failed for <gfid:351b61dc-29ed-4ef0-8b48-0c413b3fc370>.
[2013-05-20 20:36:01.739690] W [client3_1-fops.c:1556:client3_1_inodelk_cbk] 0-web-client-1: remote operation failed: No such file or directory
[2013-05-20 20:36:01.741376] W [client3_1-fops.c:1656:client3_1_entrylk_cbk] 0-web-client-1: remote operation failed: No such file or directory
[2013-05-20 20:36:01.741565] E [afr-self-heal-entry.c:2352:afr_sh_post_nonblocking_entry_cbk] 0-web-replicate-0: Non Blocking entrylks failed for <gfid:a47a96d9-556c-4d5c-b428-04be961ee19a>.
[2013-05-20 20:36:01.744961] W [client3_1-fops.c:1556:client3_1_inodelk_cbk] 0-web-client-1: remote operation failed: No such file or directory
[2013-05-20 20:36:01.746670] W [client3_1-fops.c:473:client3_1_open_cbk] 0-web-client-1: remote operation failed: No such file or directory. Path: <gfid:d0a94a9c-80d2-4778-8f25-7f9de6bce6a8> (00000000-0000-0000-0000-000000000000)
[2013-05-20 20:36:01.746702] E [afr-self-heal-data.c:1321:afr_sh_data_open_cbk] 0-web-replicate-0: open of <gfid:d0a94a9c-80d2-4778-8f25-7f9de6bce6a8> failed on child web-client-1 (No such file or directory)
[2013-05-20 20:36:01.750047] W [client3_1-fops.c:1556:client3_1_inodelk_cbk] 0-web-client-1: remote operation failed: No such file or directory
[2013-05-20 20:36:01.751756] W [client3_1-fops.c:473:client3_1_open_cbk] 0-web-client-1: remote operation failed: No such file or directory. Path: <gfid:f032cd09-afc3-4b72-8b62-51b4854ab37e> (00000000-0000-0000-0000-000000000000)
[2013-05-20 20:36:01.751787] E [afr-self-heal-data.c:1321:afr_sh_data_open_cbk] 0-web-replicate-0: open of <gfid:f032cd09-afc3-4b72-8b62-51b4854ab37e> failed on child web-client-1 (No such file or directory)
[2013-05-20 20:36:01.755104] W [client3_1-fops.c:1556:client3_1_inodelk_cbk] 0-web-client-1: remote operation failed: No such file or directory

REVIEW: http://review.gluster.org/8960 (logs: Do selective logging for errnos) posted (#2) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)

COMMIT: http://review.gluster.org/8960 committed in release-3.5 by Niels de Vos (ndevos)

------

commit 5fff385333db750561ffd026af09e52a8c8c16e6
Author: Pranith Kumar K <pkarampu>
Date:   Tue Oct 21 16:18:16 2014 +0530

    logs: Do selective logging for errnos

    Backport of http://review.gluster.org/8918
                http://review.gluster.org/8955

    Problem:
    Just after a replace-brick, the mount logs are filled with
    ENOENT/ESTALE warnings because the files are yet to be self-healed
    onto the new brick.

    Fix:
    Do conditional logging for these messages: ENOENT/ESTALE are logged
    at a lower log level, so they are written to the logfile only when
    debug logging is enabled.

    BUG: 1155073
    Change-Id: Icf06f2fc4f2f91e199de24a88bcb0ce9b8955ebd
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/8960
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
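For illustration, here is a minimal C sketch of the selective-logging approach the commit describes: pick the log level per errno, demoting expected ENOENT/ESTALE failures to DEBUG. This is not the actual GlusterFS patch; the helper fop_log_level_for_errno, the simplified loglevel_t enum, and open_cbk_log are hypothetical names, loosely modeled on gluster's GF_LOG_* levels.

```c
/* Sketch of "selective logging for errnos" (hypothetical, not the patch). */
#include <errno.h>
#include <stdio.h>

typedef enum { GF_LOG_ERROR, GF_LOG_WARNING, GF_LOG_DEBUG } loglevel_t;

static loglevel_t
fop_log_level_for_errno(int op_errno, loglevel_t default_level)
{
    /* ENOENT/ESTALE are expected right after a replace-brick, before
     * self-heal has recreated the files on the new brick: demote them
     * to DEBUG so they only appear when debug logging is enabled. */
    if (op_errno == ENOENT || op_errno == ESTALE)
        return GF_LOG_DEBUG;
    return default_level;
}

/* Hypothetical open-callback logger: emit the message only if its
 * level is enabled at the currently configured level. */
static void
open_cbk_log(const char *path, int op_errno, loglevel_t configured)
{
    loglevel_t lvl = fop_log_level_for_errno(op_errno, GF_LOG_WARNING);
    if (lvl <= configured)
        fprintf(stderr, "remote operation failed on %s: errno=%d\n",
                path, op_errno);
}

int main(void)
{
    const char *gfid = "<gfid:4cfbc34f-9227-476d-bd23-b50613c94170>";
    /* At the default WARNING level the ENOENT failure is suppressed... */
    open_cbk_log(gfid, ENOENT, GF_LOG_WARNING);
    /* ...a genuine I/O error is still logged... */
    open_cbk_log(gfid, EIO, GF_LOG_WARNING);
    /* ...and enabling DEBUG brings the ENOENT message back. */
    open_cbk_log(gfid, ENOENT, GF_LOG_DEBUG);
    return 0;
}
```

With a change of this shape, the floods shown in the excerpt above vanish at the default log level, while raising the level to DEBUG (for example via the diagnostics.client-log-level volume option, assuming it governs the daemon in question) still exposes them for troubleshooting.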
The second Beta for GlusterFS 3.5.3 has been released [1]. Please verify whether the release solves this bug report for you. In case the glusterfs-3.5.3beta2 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions have been made available on [2] to make testing easier.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019359.html
[2] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/

This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.3, please reopen this bug report.

glusterfs-3.5.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/announce/2014-November/000042.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/