We are seeing very high memory usage from gfapi.Volume when it is mounted to a big volume (one with a large brick count). Here are a few experimental results showing the memory used by a Python process mounted to different environments (VSZ / RSS, in KB):

Before mount:              212376 / 8932
2 nodes, 12-brick volume:  631644 / 21440
6 nodes, 384 bricks:       861648 / 276516
10 nodes, 600 bricks:      987116 / 432028

That is almost half a GB per process just at start, and even more once the process is actively used. As we are planning to run close to 100 client nodes, each with 50 processes, the amount of memory needed becomes enormous. Is there any reason for gfapi to use so much memory just to mount the volume? Does this mean that scaling up the server side requires a corresponding scale-up of the client side?
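For reference, here is a minimal sketch of how such numbers can be reproduced, assuming the libgfapi-python binding (imported as "from gluster import gfapi") and a Linux /proc filesystem; the host name "gluster-node1" and volume name "bigvol" are placeholders:

# measure_gfapi_mem.py - rough check of VSZ/RSS before and after a gfapi mount.
# Assumes the libgfapi-python binding is installed; host/volume names are placeholders.
from gluster import gfapi

def vsz_rss_kb():
    # Read VmSize/VmRSS (in kB) of the current process from /proc/self/status.
    sizes = {}
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith(('VmSize:', 'VmRSS:')):
                key, value = line.split(':', 1)
                sizes[key] = int(value.split()[0])  # value is "<n> kB"
    return sizes['VmSize'], sizes['VmRSS']

print('before mount (VSZ/RSS kB):', vsz_rss_kb())
vol = gfapi.Volume('gluster-node1', 'bigvol')  # placeholder host/volume
vol.mount()
print('after mount  (VSZ/RSS kB):', vsz_rss_kb())
vol.unmount()

At the observed ~430 MB of RSS per process on the 600-brick volume, 100 nodes x 50 processes works out to roughly 2 TB of client-side memory across the fleet, before any I/O is done.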
Release 3.12 has been EOL'd and this bug was still found to be in the NEW state, hence moving the version to mainline to triage it and take appropriate action.
Vladislav, apologies for the delay, but please note that we do allocate some memory per translator definition, so the more bricks a volume has, the more memory is consumed. Yes, this is a known issue for now; hence we normally claim support for up to 128 nodes/bricks only. For larger counts, one needs to provision more RAM. FYI - the structure that gets allocated for each xlator is https://github.com/gluster/glusterfs/blob/v6.0/libglusterfs/src/glusterfs/xlator.h#L767..L864 We won't be able to fix this in the near future, as most of the logic depends on this structure. Will be marking the issue as DEFERRED.
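To make the scaling a bit more concrete: the client-side graph contains one protocol/client xlator per brick (plus cluster and performance xlators stacked on top), and each instance carries its own xlator struct and private context, which is why memory grows with brick count. Below is an illustrative, hedged Python sketch that counts xlator definitions in a client volfile; the volfile name and the ~700 KB-per-brick figure (derived from the numbers in this report: (432028 - 8932) KB / 600 bricks) are assumptions for estimation only, not measured constants:

# count_xlators.py - rough estimate of client graph size from a client volfile.
# The volfile name below is a placeholder; point it at your actual client volfile.
import sys

VOLFILE = sys.argv[1] if len(sys.argv) > 1 else 'bigvol-fuse.vol'  # placeholder name

total_xlators = 0
client_xlators = 0
with open(VOLFILE) as f:
    for line in f:
        line = line.strip()
        if line.startswith('type '):
            total_xlators += 1
            if line == 'type protocol/client':
                client_xlators += 1

print('xlator instances in graph  :', total_xlators)
print('protocol/client (per brick):', client_xlators)
# Illustrative only: assume ~700 KB resident per brick-side xlator, roughly what
# the numbers in this report suggest.
print('rough RSS estimate: ~%d MB' % (client_xlators * 700 // 1024))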