Bug 1806823 - glusterfsd consumes a large amount of memory during IO from multiple clients; after the IO finishes, memory does not drop
Summary: glusterfsd consumes a large amount of memory during IO from multiple clients; after ...
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: io-threads
Version: 7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-25 05:42 UTC by zhou lin
Modified: 2020-03-12 12:21 UTC
CC List: 2 users

Fixed In Version:
Doc Type: ---
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-12 12:21:47 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
glusterfsd statedump exported after stopping fio and removing the fio files (47.50 KB, text/plain)
2020-02-25 05:42 UTC, zhou lin

Description zhou lin 2020-02-25 05:42:37 UTC
Created attachment 1665557 [details]
glusterfsd statedump exported after stopping fio and removing the fio files

Description of problem:
When IO is run from 10 glusterfs clients at the same time, glusterfsd memory consumption grows; after the IO finishes, glusterfsd memory usage does not drop back.

Version-Release number of selected component (if applicable):
glusterfs 7

How reproducible:


Steps to Reproduce:
1. Start IO with fio on 10 glusterfs clients at the same time (see the sketch below).
2. Observe the memory usage of glusterfsd.
3. Finish the IO and remove the files created by fio.
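
The report does not give the exact fio job, so the following is only a minimal sketch of step 1, assuming the volume is mounted at /mnt/export on each client; the job name, block size, and file size are illustrative, and the command is run concurrently on all 10 clients:

# fio --name=memtest --directory=/mnt/export --rw=randwrite --bs=1M --size=1G --numjobs=4 --ioengine=libaio --direct=1

Any comparable sustained write load from several clients should drive the same io-threads worker pool seen in the thread list below.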

Actual results:
glusterfsd memory usage stays high:
# ps -aux | grep glusterfsd| grep export
root        2643 84.8 20.2 2358828 412908 ?      Ssl  05:32   5:48 /usr/sbin/glusterfsd -s dbm-0.local --volfile-id export.dbm-0.local.mnt-bricks-export-brick -p /var/run/gluster/vols/export/dbm-0.local-mnt-bricks-export-brick.pid -S /var/run/gluster/bab7bc2d0256ba6a.socket --brick-name /mnt/bricks/export/brick -l /var/log/glusterfs/bricks/mnt-bricks-export-brick.log --xlator-option *-posix.glusterd-uuid=17da950d-c7a1-4a01-b139-20f8fb801346 --process-name brick --brick-port 53954 --xlator-option export-server.listen-port=53954 --xlator-option transport.socket.bind-address=dbm-0.local

[root@dbm-0:/root]
# ps -T -p 2643
    PID    SPID TTY          TIME CMD
   2643    2643 ?        00:00:00 glusterfsd
   2643    2644 ?        00:00:00 glfs_timer
   2643    2645 ?        00:00:00 glfs_sigwait
   2643    2646 ?        00:00:00 glfs_memsweep
   2643    2647 ?        00:00:00 glfs_sproc0
   2643    2648 ?        00:00:00 glfs_sproc1
   2643    2649 ?        00:00:00 glusterfsd
   2643    2650 ?        00:00:36 glfs_epoll000
   2643    2651 ?        00:00:37 glfs_epoll001
   2643    3046 ?        00:00:00 glfs_idxwrker
   2643    3047 ?        00:00:14 glfs_iotwr000
   2643    3050 ?        00:00:00 glfs_clogecon
   2643    3051 ?        00:00:00 glfs_clogd000
   2643    3054 ?        00:00:00 glfs_clogd001
   2643    3055 ?        00:00:00 glfs_clogd002
   2643    3060 ?        00:00:00 glfs_posix_rese
   2643    3061 ?        00:00:00 glfs_posixhc
   2643    3062 ?        00:00:00 glfs_posixctxja
   2643    3063 ?        00:00:00 glfs_posixfsy
   2643    3081 ?        00:00:20 glfs_rpcrqhnd
   2643    3229 ?        00:00:20 glfs_rpcrqhnd
   2643    3334 ?        00:00:14 glfs_iotwr001
   2643    3335 ?        00:00:14 glfs_iotwr002
   2643    3992 ?        00:00:14 glfs_iotwr003
   2643    3995 ?        00:00:14 glfs_iotwr004
   2643    4004 ?        00:00:14 glfs_iotwr005
   2643    4005 ?        00:00:14 glfs_iotwr006
   2643    4006 ?        00:00:14 glfs_iotwr007
   2643    4016 ?        00:00:14 glfs_iotwr008
   2643    4017 ?        00:00:14 glfs_iotwr009
   2643    4019 ?        00:00:14 glfs_iotwr00a
   2643    4020 ?        00:00:14 glfs_iotwr00b
   2643    4021 ?        00:00:14 glfs_iotwr00c
   2643    4022 ?        00:00:14 glfs_iotwr00d
   2643    4023 ?        00:00:14 glfs_iotwr00e
   2643    4024 ?        00:00:14 glfs_iotwr00f
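
For reference (the report does not say how the attached statedump was captured), a brick statedump like the one attached can be generated with the standard gluster CLI and is written under /var/run/gluster by default:

# gluster volume statedump export
# ls /var/run/gluster/mnt-bricks-export-brick.*.dump.*

Comparing successive dumps while watching the RSS column of ps for pid 2643 helps show whether the growth comes from gluster-level allocations or from the glibc heap.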




Expected results:

After the files created by fio are removed, glusterfsd memory usage should drop back to its previous level.
Additional info:

Comment 1 zhou lin 2020-02-25 05:44:48 UTC
# cat mnt-bricks-export-brick.2643.dump.1582602052
DUMP-START-TIME: 2020-02-25 03:40:52.167274

[mallinfo]
mallinfo_arena=372768725
mallinfo_ordblks=795
mallinfo_smblks=323
mallinfo_hblks=17
mallinfo_hblkhd=17350656
mallinfo_usmblks=0
mallinfo_fsmblks=33184
mallinfo_uordblks=1752549
mallinfo_fordblks=371016176
mallinfo_keepcost=124592


Why is mallinfo_fordblks so big?
In my test this is a permanent issue: after running fio, the glusterfsd memory never goes down.
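
A note on the numbers above (not part of the original report): mallinfo_fordblks is the total free space inside the glibc heap arena, and here it accounts for essentially the whole arena:

# echo $((372768725 - 1752549))    # mallinfo_arena - mallinfo_uordblks
371016176

i.e. fordblks = arena - uordblks. Roughly 350 MB of the arena is memory glusterfsd has already freed but that glibc keeps cached in the heap rather than returning to the kernel, which would be consistent with the RSS reported by ps staying high after the fio files are removed. glibc normally returns such memory only when it can trim the top of the heap (compare mallinfo_keepcost) or when malloc_trim() is called.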

Comment 2 Worker Ant 2020-03-12 12:21:47 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/880 and will be tracked there from now on. Visit the GitHub issue URL for further details.

