Bug 1511767 - After detach tier start glusterd log flooded with "0-transport: EPOLLERR - disconnecting now" messages
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: hari gowtham
QA Contact: Sweta Anandpara
Depends On:
Blocks: 1503137
Reported: 2017-11-10 00:14 EST by Bala Konda Reddy M
Modified: 2018-09-04 02:40 EDT
CC List: 4 users

See Also:
Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 02:39:09 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: none


External Trackers:
  Tracker ID: Red Hat Product Errata RHSA-2018:2607
  Priority: None  Status: None  Summary: None
  Last Updated: 2018-09-04 02:40 EDT

Description Bala Konda Reddy M 2017-11-10 00:14:21 EST
Description of problem:
After performing detach tier start, the glusterd log is flooded with "[socket.c:2465:socket_event_handler] 0-transport: EPOLLERR - disconnecting now" messages every 3 seconds.

Version-Release number of selected component (if applicable):
3.8.4-51

How reproducible:
always

Steps to Reproduce:
1. Create and start a disperse volume.
2. Mount the volume and write some data.
3. Attach a replica-2 hot tier to the volume.
4. Perform detach tier start (a CLI sketch of these steps follows below).
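
A minimal command-line sketch of the reproduction, assuming hypothetical hosts server1-server3 and brick paths; the exact tier CLI syntax varies across glusterfs releases, so treat this as illustrative rather than exact:

    # 1. Create and start a 2+1 disperse volume (hypothetical bricks).
    gluster volume create demovol disperse 3 redundancy 1 \
        server1:/bricks/cold server2:/bricks/cold server3:/bricks/cold force
    gluster volume start demovol

    # 2. Mount the volume and write some data.
    mount -t glusterfs server1:/demovol /mnt/demovol
    dd if=/dev/urandom of=/mnt/demovol/file1 bs=1M count=100

    # 3. Attach a replica-2 hot tier (hypothetical bricks).
    gluster volume tier demovol attach replica 2 \
        server1:/bricks/hot server2:/bricks/hot force

    # 4. Start the detach, then watch the glusterd log for the flood.
    # (The log file name may differ per build, e.g. etc-glusterfs-glusterd.vol.log.)
    gluster volume tier demovol detach start
    grep 'EPOLLERR - disconnecting now' /var/log/glusterfs/glusterd.log | tail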

Actual results:
Functionality-wise it works fine, but the glusterd log is flooded with these info messages every 3 seconds.

Expected results:
Continuous "EPOLLERR - disconnecting now" messages should not appear in the glusterd log.

Additional info:
Comment 2 hari gowtham 2017-11-10 05:41:42 EST
Partial RCA:
The defrag variable being shared between the tier process and the detach process does not cause the issue, contrary to the first suspicion: an older downstream version (3.8.0), where the two processes share the defrag variable, behaves fine.

With the downstream code (3.8.4-51) I can see a disconnect but no subsequent connect; maybe that is why it keeps trying to reconnect.

I need to look further to understand why this changed, and why we don't get an RPC connect with the current code (3.8.4-51).
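
For illustration, a minimal self-contained C sketch (not the glusterfs source) of the mechanism behind the flooded line: an epoll event handler that "disconnects" on EPOLLERR/EPOLLHUP, plus a fixed 3-second reconnect timer, which, against a peer that never comes back, prints one "EPOLLERR - disconnecting now" line per cycle:

    /* epollerr_loop.c: illustrative sketch only, NOT the glusterfs code.
     * Demonstrates the pattern behind the flooded message: EPOLLERR or
     * EPOLLHUP => "disconnect", 3-second timer => re-arm the dead fd,
     * so the same event fires again on every cycle.
     * Build: cc epollerr_loop.c -o epollerr_loop */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        close(sv[1]);               /* the peer goes away for good */

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN,
                                  .data = { .fd = sv[0] } };
        epoll_ctl(epfd, EPOLL_CTL_ADD, sv[0], &ev);

        for (int cycle = 0; cycle < 3; cycle++) {   /* bounded for the demo */
            struct epoll_event out;
            if (epoll_wait(epfd, &out, 1, -1) == 1 &&
                (out.events & (EPOLLERR | EPOLLHUP))) {
                /* the line that floods glusterd.log */
                printf("0-transport: EPOLLERR - disconnecting now\n");
                epoll_ctl(epfd, EPOLL_CTL_DEL, out.data.fd, NULL);
            }
            sleep(3);                               /* reconnect interval */
            /* "reconnect": the peer is still gone, so re-adding the fd
             * just queues the same EPOLLERR/EPOLLHUP for the next wait */
            epoll_ctl(epfd, EPOLL_CTL_ADD, sv[0], &ev);
        }
        return 0;
    }

Per the partial RCA above, glusterd keeps seeing the disconnect without a successful reconnect, so the cycle never ends until the peer process returns or the connection attempts stop.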
Comment 3 hari gowtham 2017-11-13 07:55:18 EST
Hi,

The above issue is not reproducible with downstream version 3.4.0; things work fine. Is it necessary to look into this further, given that the issue is fixed in 3.4.0?

Regards,
Hari.
Comment 9 errata-xmlrpc 2018-09-04 02:39:09 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
