Bug 1758784

Summary: [Tracker #1757420] memory leak in glusterfsd with error from iot_workers_scale function
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Raghavendra Talur <rtalur>
Component: rhgs-server-container Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA QA Contact: Rachael <rgeorge>
Severity: urgent Docs Contact:
Priority: urgent    
Version: ocs-3.11 CC: ccalhoun, knakai, knarra, madam, pdhange, puebele, rcyriac, rgeorge, rhs-bugs
Target Milestone: --- Keywords: ZStream
Target Release: OCS 3.11.z Batch Update 4   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: rhgs-server-container-3.11.4-14 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-10-30 12:32:53 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1757420    
Bug Blocks:    

Description Raghavendra Talur 2019-10-05 15:40:25 UTC
Description of problem:

Creation of a new PVC is causing all existing bricks to go offline.

We are seeing multiple messages in the brick logs showing that the iot_workers_scale function is being called regularly, for example:

~~~
./bricks/var-lib-heketi-mounts-vg_a297e8e6c7ee27ef50ecdb8d275b5b1e-brick_59a437f1dda29b1f864cef14bb79969a-brick.log:[2019-09-30 10:33:37.520821] D [MSGID: 0] [io-threads.c:822:__iot_workers_scale] 12-vol_b0008eb67e38f0353db3de8d7ac8d696-io-threads: scaled threads to 3 (queue_size=3/3)
~~~
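
For context, the log message above is the io-threads translator reporting that it has grown its worker-thread pool to match the depth of the pending request queue. Below is a minimal, self-contained C sketch of that general scaling pattern only, not the actual io-threads.c implementation; the names iot_conf, scale_workers, worker_main and the MAX_THREADS cap are illustrative assumptions.

~~~
/* Simplified sketch of a queue-driven worker scaler (illustrative only,
 * NOT the GlusterFS io-threads code). Build with: cc sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define MAX_THREADS 16                     /* assumed cap, for illustration */

struct iot_conf {                          /* hypothetical config struct */
    pthread_mutex_t lock;
    int curr_count;                        /* workers currently running     */
    int queue_size;                        /* pending requests in the queue */
};

static void *worker_main(void *arg)
{
    /* A real worker would pop requests off the queue and serve them;
     * this stub returns immediately so the example stays self-contained. */
    (void)arg;
    return NULL;
}

/* Called with conf->lock held, echoing the "__" naming convention that
 * __iot_workers_scale in io-threads.c suggests. */
static int scale_workers(struct iot_conf *conf)
{
    while (conf->curr_count < conf->queue_size &&
           conf->curr_count < MAX_THREADS) {
        pthread_t tid;

        if (pthread_create(&tid, NULL, worker_main, conf) != 0)
            return -1;                     /* spawn failed */

        pthread_detach(tid);
        conf->curr_count++;
        printf("scaled threads to %d (queue_size=%d)\n",
               conf->curr_count, conf->queue_size);
    }
    return 0;
}

int main(void)
{
    struct iot_conf conf = { PTHREAD_MUTEX_INITIALIZER, 0, 3 };

    pthread_mutex_lock(&conf.lock);
    scale_workers(&conf);                  /* logs one line per spawned worker */
    pthread_mutex_unlock(&conf.lock);
    return 0;
}
~~~

Run as-is, the sketch prints three "scaled threads to N" lines, mirroring the queue_size=3/3 value seen in the captured log message.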

Version-Release number of selected component (if applicable):
OCS 3.11

How reproducible:
Frequently, in the customer environment


Actual results:
All brick processes go offline, causing all volumes to go down.

Expected results:
Creation of a new PVC should not cause existing bricks to go offline.

Additional info: provided in further comments

Comment 12 errata-xmlrpc 2019-10-30 12:32:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3257