+++ This bug was initially created as a clone of Bug #1380655 +++

Description of problem:
=======================
When glusterd is down on the volume's volfile server, the error messages below appear in the volume mount log every 3 seconds, continuously.

<START>
[2016-09-30 08:45:54.917489] E [glusterfsd-mgmt.c:1922:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 10.70.43.190 (Transport endpoint is not connected)
[2016-09-30 08:45:54.917542] I [glusterfsd-mgmt.c:1939:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2016-09-30 08:45:57.924521] E [glusterfsd-mgmt.c:1922:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 10.70.43.190 (Transport endpoint is not connected)
[2016-09-30 08:45:57.924585] I [glusterfsd-mgmt.c:1939:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2016-09-30 08:46:00.931708] E [glusterfsd-mgmt.c:1922:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 10.70.43.190 (Transport endpoint is not connected)
[2016-09-30 08:46:00.931781] I [glusterfsd-mgmt.c:1939:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2016-09-30 08:46:03.938789] E [glusterfsd-mgmt.c:1922:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 10.70.43.190 (Transport endpoint is not connected)
[2016-09-30 08:46:03.938857] I [glusterfsd-mgmt.c:1939:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
<END>

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-2

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have a one- or two-node cluster.
2. Create a replicated volume (I used 7 x 2 = 14).
3. FUSE-mount the volume.
4. Stop glusterd on the node the volume is mounted from.
5. Check the volume mount log.

Actual results:
===============
Continuous error messages are logged every 3 seconds.

Expected results:
=================
The error logging should be rate-limited, or some other solution found. Logging every 3 seconds will consume a lot of log storage if the volfile server stays down for any length of time.

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-09-30 05:06:52 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Ravishankar N on 2016-09-30 05:17:51 EDT ---

Changing component to core, since this is not relevant to FUSE per se and the behaviour can be observed on gNFS mounts too.

--- Additional comment from Byreddy on 2016-10-18 02:42:04 EDT ---

This issue is not present in the last GA build.

--- Additional comment from Byreddy on 2016-10-26 00:02:25 EDT ---

@Atin, any reason why we moved this bug out of 3.2.0? As per comment 3, this issue was newly introduced in the 3.2.0 build, and it will consume a lot of volume mount log storage if the volfile server is down for any length of time.

--- Additional comment from Atin Mukherjee on 2016-10-26 00:36:14 EDT ---

Apologies Byreddy, I completely missed comment 3. I will move this back to 3.2.0 for further analysis. Thanks for catching it!

--- Additional comment from Atin Mukherjee on 2016-10-26 07:33:21 EDT ---

Upstream mainline patch http://review.gluster.org/15732 has been posted for review.
--- Additional comment from Mohit Agrawal on 2016-10-27 01:20:09 EDT ---

Hi,

The messages from mgmt_rpc_notify are logged continuously in this build because patch http://review.gluster.org/#/c/13002/ removed a check that previously guarded this code block in the RPC_CLNT_DISCONNECT case. To reduce the frequency of the messages, change the gf_log call to GF_LOG_OCCASIONALLY (see the sketch after the automated comments below).

Regards,
Mohit Agrawal

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-07 06:44:49 EST ---

This bug is automatically being provided 'pm_ack+' for the release flag 'rhgs-3.2.0', the current release of Red Hat Gluster Storage 3 under active development, having been appropriately marked for the release, and having been provided ACK from Development and QE.

If the 'blocker' flag had been proposed/set on this BZ, it has now been unset, since the 'blocker' flag is not valid for the current phase of RHGS 3.2.0 development.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-08 00:05:14 EST ---

Since this bug has been approved for the RHGS 3.2.0 release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.2.0+', and through the Internal Whiteboard entry of '3.2.0', the Target Release is being automatically set to 'RHGS 3.2.0'.
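For illustration, here is a minimal, self-contained sketch of the rate-limiting approach Mohit describes above. It assumes GF_LOG_OCCASIONALLY behaves as in libglusterfs (emitting a message only once every GF_UNIVERSAL_ANSWER, i.e. 42, calls); the gf_log() stub, the counter name, and the retry loop are illustrative stand-ins, not the actual glusterfsd-mgmt.c change from the posted patch.

/*
 * Sketch: throttle the reconnect error so it is emitted only once every
 * GF_UNIVERSAL_ANSWER calls rather than on every 3-second retry. The macro
 * below mirrors the one in libglusterfs/src/logging.h, reproduced in
 * simplified form so the sketch compiles on its own. Build with gcc/clang
 * (uses GNU variadic-macro extensions, as GlusterFS itself does).
 */
#include <stdio.h>

#define GF_UNIVERSAL_ANSWER 42

/* Simplified stand-in for GlusterFS's gf_log(domain, level, fmt, ...). */
#define gf_log(domain, level, fmt, ...) \
        fprintf(stderr, "[%s] %s: " fmt "\n", level, domain, ##__VA_ARGS__)

/* Log only once every GF_UNIVERSAL_ANSWER invocations; 'var' is a
 * caller-supplied counter that must persist across calls. */
#define GF_LOG_OCCASIONALLY(var, args...)        \
        if (!((var)++ % GF_UNIVERSAL_ANSWER)) {  \
                gf_log(args);                    \
        }

int
main(void)
{
        static int  log_ctr = 0;  /* one counter per log site */
        const char *volfile_server = "10.70.43.190";
        int         attempt;

        /* Stand-in for the reconnect loop: with a plain gf_log() this
         * prints on every iteration (every 3 seconds in the daemon); with
         * GF_LOG_OCCASIONALLY only attempts 0, 42, 84, ... are logged. */
        for (attempt = 0; attempt < 100; attempt++) {
                GF_LOG_OCCASIONALLY(log_ctr, "glusterfsd-mgmt", "E",
                                    "failed to connect with remote-host: %s "
                                    "(Transport endpoint is not connected)",
                                    volfile_server);
        }

        return 0;
}

The key design point is that each log site keeps its own persistent counter, so a flood from one call site is throttled without suppressing messages from unrelated sites.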
REVIEW: http://review.gluster.org/15823 (glusterfsd: Continuous errors are getting in mount logs while glusterd is down) posted (#1) for review on release-3.9 by MOHIT AGRAWAL (moagrawa)
This bug is getting closed because GlusterFS-3.9 has reached its end-of-life [1].

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please open a new bug against the newer release.

[1]: https://www.gluster.org/community/release-schedule/