Bug 1543585 - Client Memory Usage Drastically Increased from 3.12 to 3.13 for Replicate 3 Volumes
Summary: Client Memory Usage Drastically Increased from 3.12 to 3.13 for Replicate 3 Volumes
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.13
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-08 18:46 UTC by Ellie
Modified: 2018-06-20 18:25 UTC
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-20 18:25:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
Contains statedump files for before and after the test for each version, results of the test for each version, and the test script (8.14 MB, application/zip)
2018-02-08 18:46 UTC, Ellie

Description Ellie 2018-02-08 18:46:59 UTC
Created attachment 1393376
Contains statedump files for before and after the test for each version, results of the test for each version, and the test script

Description of problem:
On a replicate 3 volume spread across 3 servers, the client running version 3.13.2 uses significantly more memory than the one running 3.12.5 (which already uses over 1G)

Version-Release number of selected component (if applicable):
3.13.2

How reproducible:
We have seen this problem 4/4 test runs

Steps to Reproduce:
1. Create a volume that is of type replicate 3 across 3 different servers
2. Unzip the attached zip file and extract mem_test.sh onto one of the servers in the cluster
3. Update the script to replace the variables with your own paths for mount, brick, etc (replace "/path/to..." with your locations)
4. Run the script with the following command: "./mem-test.sh -r 10 -t 10 -m read -s". This will create 1 million files, with 10 threads writing 100,000 files each, and then the script will read each file from the mount.
5. Repeat this on a site with gluster version 3.12 and 3.13
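The attached mem_test.sh is not reproduced here; a heavily scaled-down sketch of the workload it describes (several writer threads each creating files under the mount, followed by a read pass over every file) might look like the following. The path and counts are placeholders, not the script's actual values:

```shell
#!/bin/sh
# Hypothetical, scaled-down sketch of the mem_test.sh workload.
# THREADS writers each create FILES files, then every file is read back.
MOUNT=/tmp/memtest-mount   # replace with the Gluster FUSE mount point
THREADS=4                  # the report used -t 10
FILES=25                   # the report wrote 100,000 files per thread

mkdir -p "$MOUNT"
i=0
while [ "$i" -lt "$THREADS" ]; do
    (
        # each writer creates its own uniquely named files
        j=0
        while [ "$j" -lt "$FILES" ]; do
            echo "payload-$i-$j" > "$MOUNT/file-$i-$j"
            j=$((j + 1))
        done
    ) &
    i=$((i + 1))
done
wait   # all writer processes finished

# Read phase: pull every file back through the mount.
cat "$MOUNT"/file-* > /dev/null
ls "$MOUNT" | wc -l   # number of files written
```

Against a real Gluster mount, the write and read phases above are what drive the client-side memory growth being measured.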

Actual results:
The gluster_processes.csv file shows that the 3.13 client uses ~5G of memory, while the 3.12 client used ~1G. These values can be seen in the client_mem_res column. NOTE: values in gluster_processes.csv that have no unit are in KB.

Expected results:
The memory usage should not increase that much between versions. If anything it should decrease.

Additional info:
In addition to the script, the zip file also contains
- gluster_processes_<version-number>.csv files show the results of our test runs for each version
- before_test_<version-number>.glusterdump files show the statedump for the glusterfs process BEFORE the tests start for each version
- after_test_<version-number>.glusterdump files show the statedump for the glusterfs process AFTER the tests complete for each version

Comment 1 Nithya Balachandran 2018-02-13 07:14:34 UTC
Is this a pure replicate volume? Can you provide the output of gluster volume info?

Comment 2 Ellie 2018-02-13 14:04:51 UTC
(In reply to Nithya Balachandran from comment #1)
> Is this a pure replicate volume? Can you provide the output of gluster
> volume info?

Yes, this is a pure replicate volume. The result of the command is

Volume Name: <volumeName>
Type: Replicate
Volume ID: 6264f4b6-be81-4c3b-8ddd-882ac1325cde
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: <IP Address 1>:</path/to/mount/on/server1>
Brick2: <IP Address 2>:</path/to/mount/on/server2>
Brick3: <IP Address 3>:</path/to/mount/on/server3>
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: true
transport.address-family: inet
performance.io-thread-count: 64
auth.allow: <IP Address 1>,<IP Address 2>,<IP Address 3>
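For reference, a volume matching the output above could be created with commands along these lines. The volume name, IP addresses, and brick paths are placeholders taken from the redacted output, not real values:

```shell
# Hypothetical recreation of the reported volume; substitute real
# hostnames/IPs and brick paths before running.
gluster volume create <volumeName> replica 3 transport tcp \
    <IP Address 1>:/path/to/brick/on/server1 \
    <IP Address 2>:/path/to/brick/on/server2 \
    <IP Address 3>:/path/to/brick/on/server3

# Apply the "Options Reconfigured" values from the report.
gluster volume set <volumeName> performance.client-io-threads off
gluster volume set <volumeName> nfs.disable true
gluster volume set <volumeName> transport.address-family inet
gluster volume set <volumeName> performance.io-thread-count 64
gluster volume set <volumeName> auth.allow '<IP Address 1>,<IP Address 2>,<IP Address 3>'

gluster volume start <volumeName>
```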

Comment 3 Nithya Balachandran 2018-02-14 13:39:43 UTC
Thank you. I will try to take a look at this next week.

Comment 4 Danny Lee 2018-02-22 16:02:30 UTC
Definitely take a look at the "iobuf" value under "xlator.mount.fuse.priv".  It seems quite high.  This ticket might be related to https://bugzilla.redhat.com/show_bug.cgi?id=1501146
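For anyone wanting to check this counter themselves: a client statedump can be triggered by sending SIGUSR1 to the glusterfs FUSE process (dumps land under /var/run/gluster by default). Statedump files are INI-like ([section] headers with key=value lines), so a single field can be pulled out with awk. The sample section and value below are made up for illustration:

```shell
#!/bin/sh
# Build a tiny, made-up statedump-style sample to demonstrate extraction.
cat > /tmp/sample.glusterdump <<'EOF'
[xlator.mount.fuse.priv]
fd_count=2
iobuf=4829
EOF

# Track the current [section]; print the iobuf value inside the
# xlator.mount.fuse.priv section.
awk -F= '/^\[/{sec=$0}
         sec=="[xlator.mount.fuse.priv]" && $1=="iobuf"{print $2}' \
    /tmp/sample.glusterdump
```

On a live client, the same awk line run against the real glusterdump.&lt;pid&gt; file (after `kill -USR1 <pid>`) would print the counter Danny refers to.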

Comment 5 Shyamsundar 2018-06-20 18:25:10 UTC
This bug was reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained.

As a result this bug is being closed.

If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and set the Version field appropriately.

