Bug 1653225 - brick memory consumed by a volume is not released even after delete
Summary: brick memory consumed by a volume is not released even after delete
Status: CLOSED DUPLICATE of bug 1790336
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact: Nag Pavan Chilakam
Depends On:
Blocks: 1656682
Reported: 2018-11-26 10:15 UTC by Nag Pavan Chilakam
Modified: 2020-02-14 13:53 UTC
9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1656682 (view as bug list)
Last Closed: 2020-02-14 13:53:47 UTC
Target Upstream Version:


Description Nag Pavan Chilakam 2018-11-26 10:15:48 UTC
Description of problem:
On a brick-mux setup, when we create volumes in sequence, glusterfsd memory keeps increasing; but when we delete a volume, that memory is not freed.
This can keep piling up if we do multiple volume creates and deletes.
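One quick way to see the growth (a hedged sketch; it assumes a single multiplexed glusterfsd process is running) is to compare the brick process RSS before and after a volume delete:

```shell
# RSS (KiB) of the multiplexed brick process; run this before and
# after deleting a volume and compare the two numbers. With the
# leak, they are nearly identical instead of dropping.
pid=$(pgrep -f glusterfsd | head -n1)
ps -o rss= -p "$pid"
```

For a finer-grained breakdown, a statedump (`gluster volume statedump <volname>`) includes per-translator memory-accounting sections.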

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a brick-mux setup.
2. Create a volume, say basevol.
3. Now create multiple volumes and also delete them (don't delete basevol); it can be seen that the brick memory footprint only keeps increasing and never decreases.
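The steps above can be sketched as a shell loop (hedged: the host name, brick paths, and cycle count are placeholders, not from the report; `--mode=script` suppresses the CLI's confirmation prompts):

```shell
#!/bin/bash
# Reproduction sketch: repeated volume create/delete cycles under brick
# multiplexing, watching the RSS of the single multiplexed glusterfsd.
# host1 and /bricks/* are placeholder values for a real brick-mux setup.

# Resident set size (KiB) of a process, read from /proc.
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# The base volume keeps the multiplexed brick process alive throughout.
gluster volume create basevol host1:/bricks/basevol force
gluster volume start basevol
pid=$(pgrep -f glusterfsd | head -n1)
echo "baseline RSS: $(rss_kb "$pid") kB"

# With the leak, RSS grows each cycle and never drops after the delete.
for i in $(seq 1 20); do
    gluster volume create "vol$i" host1:/bricks/"vol$i" force
    gluster volume start "vol$i"
    gluster --mode=script volume stop "vol$i"
    gluster --mode=script volume delete "vol$i"
    echo "cycle $i RSS: $(rss_kb "$pid") kB"
done
```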

Actual results:

Expected results:
Memory should be freed after every volume delete.

Additional info:
This issue was already reported and fixed as part of BZ#1535281.
However, the problem has reappeared as part of a regression introduced sometime in 3.4.x.

Raising this bug to track the above issue.

Comment 2 Nag Pavan Chilakam 2018-11-26 11:04:15 UTC
sosreports @  http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/nchilaka/bug.1653225

Comment 3 Sunil Kumar Acharya 2018-12-07 08:30:54 UTC
Upstream Patch:  https://review.gluster.org/21810
