Bug 1628219 - High memory consumption depending on volume bricks count
Summary: High memory consumption depending on volume bricks count
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-12 13:24 UTC by Vladislav
Modified: 2019-06-14 10:35 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-14 10:35:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Vladislav 2018-09-12 13:24:04 UTC
We have observed very high memory usage from gfapi.Volume when mounting a big volume (one with a large brick count). Below are a few experiment results showing the memory used by a Python process mounted to different environments:
Before mount (VSZ / RSS, in KiB): 212376 / 8932
(2 nodes) 12 bricks: 631644 / 21440
(6 nodes) 384 bricks: 861648 / 276516
(10 nodes) 600 bricks: 987116 / 432028

Almost half a GB per process just at start, and even more when actively used. Since we are planning to run around 100 client nodes, each with 50 processes, the total amount of memory needed becomes enormous.
Is there any reason for gfapi to use so much memory just to mount the volume?
Does that mean that scaling up the server side requires a corresponding scale-up of the client side?
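
For reference, a minimal sketch of how these measurements can be taken with the libgfapi-python bindings (the hostname and volume name below are placeholders; VSZ/RSS are read from /proc, so this is Linux-only):

from gluster import gfapi

def vsz_rss_kib():
    # Read VmSize/VmRSS (in KiB) of the current process from /proc.
    sizes = {}
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith(('VmSize:', 'VmRSS:')):
                key, value = line.split(':', 1)
                sizes[key] = int(value.split()[0])
    return sizes['VmSize'], sizes['VmRSS']

print('Before mount (VSZ / RSS): %d / %d' % vsz_rss_kib())

vol = gfapi.Volume('server.example.com', 'bigvolume')  # placeholder endpoint
vol.mount()

print('After mount (VSZ / RSS): %d / %d' % vsz_rss_kib())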

Comment 1 Shyamsundar 2018-10-23 14:54:56 UTC
Release 3.12 has been EOLed and this bug was still in the NEW state, hence moving the version to mainline so it can be triaged and appropriate action taken.

Comment 2 Amar Tumballi 2019-06-14 10:35:29 UTC
Vladislav, apologies for the delay. Please note that we do allocate some memory per translator definition, so the more bricks a volume has, the more memory is consumed on the client.

Yes, this is a known issue for now, which is why we normally claim support for only up to 128 nodes/bricks. To use larger counts, one definitely needs more RAM on the client.
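
As a rough back-of-envelope sketch based only on the numbers in the original report (an estimate, not an official sizing figure), the client RSS growth works out to roughly 0.7 MiB per brick for the larger volumes:

# Per-brick RSS growth derived from the figures in comment 0.
# Rough estimate only; the real cost depends on volume type and xlator graph.
baseline_rss_kib = 8932
samples = [(12, 21440), (384, 276516), (600, 432028)]  # (bricks, RSS in KiB)
for bricks, rss in samples:
    per_brick = (rss - baseline_rss_kib) / float(bricks)
    print('%4d bricks: ~%.0f KiB RSS per brick' % (bricks, per_brick))
# => about 1042, 697 and 705 KiB per brick respectively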

FYI - The structure which gets allocated for each xlator is https://github.com/gluster/glusterfs/blob/v6.0/libglusterfs/src/glusterfs/xlator.h#L767..L864

We won't be able to fix this in the near future, as most of the logic depends on this structure. Marking the issue as DEFERRED.

