Bug 1490203 - [GSS] glusterfs process consumes huge memory on both server and client nodes
Summary: [GSS] glusterfs process consumes huge memory on both server and client nodes
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: hari gowtham
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Duplicates: 1497108 (view as bug list)
Depends On: 1496379 1497084 1497108
Blocks:
 
Reported: 2017-09-11 05:29 UTC by WenhanShi
Modified: 2023-09-14 04:07 UTC (History)
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1496379 (view as bug list)
Environment:
Last Closed: 2018-10-30 08:38:57 UTC
Embargoed:


Attachments
error shown (92.91 KB, image/jpeg)
2017-09-17 12:14 UTC, liuwei
valgrind information (224.15 KB, application/zip)
2017-09-25 11:14 UTC, liuwei
The new valgrind log (8.22 KB, application/x-7z-compressed)
2017-09-26 08:08 UTC, liuwei

Description WenhanShi 2017-09-11 05:29:40 UTC
Description of problem:
The client uses FUSE to access a Gluster tier volume. When the customer's batch application writes files to this tier volume, over 30 GB of memory is consumed on both the server and client nodes.

Version-Release number of selected component (if applicable):
RHGS 3.2

How reproducible:
Every time, in the customer environment.

Steps to Reproduce:
1. Run the customer's batch application to write files to the gluster tier volume.

Actual results:
Memory exhaustion occurs: over 30 GB of memory is consumed on both the server and client nodes.

Expected results:
No memory exhaustion should occur.


Additional info:
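A minimal stand-in workload for anyone trying to reproduce this outside the customer environment (a sketch only: the real trigger is the customer's batch application, and the mount point, file count, and file size below are assumptions):

#!/usr/bin/env python3
# Write many small files to the FUSE mount while sampling the RSS of
# glusterfs/glusterfsd processes from /proc. MOUNT, NUM_FILES and the
# 4 KiB file size are assumptions, not the customer's actual workload.
import os
import time

MOUNT = "/mnt/glusterfs"   # assumed FUSE mount point of the tier volume
NUM_FILES = 100000         # assumed scale; tune to match the real workload

def gluster_rss_kb():
    # Sum VmRSS (in kB) of every glusterfs/glusterfsd process.
    total = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/comm" % pid) as f:
                if f.read().strip() not in ("glusterfs", "glusterfsd"):
                    continue
            with open("/proc/%s/status" % pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total += int(line.split()[1])
                        break
        except OSError:
            continue  # process exited while we were scanning
    return total

for i in range(NUM_FILES):
    with open(os.path.join(MOUNT, "file_%06d" % i), "wb") as f:
        f.write(b"x" * 4096)
    if i % 10000 == 0:
        print("%d files written, gluster RSS ~%d kB" % (i, gluster_rss_kb()))
        time.sleep(1)

Watch whether the reported RSS keeps growing without bound as files are written.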

Comment 6 liuwei 2017-09-17 12:14:36 UTC
Created attachment 1326952 [details]
error shown

Comment 14 Raghavendra G 2017-09-18 11:54:05 UTC
> One interesting thing is the HUGE number of inodes in use - 785929 active inodes. Do you think that many inodes/files/directories could really be in active use at the time this statedump was taken? Note that these inodes may also represent directories, even though a file is accessed through its dentry structure. Also, the access need not come only from user-space applications; it could also be due to internal daemons like tier promotion/demotion, heal, quotad, etc.
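To sanity-check that figure against other dumps, a small sketch that tallies the active_size counter of each inode table in a statedump (assuming the usual key=value statedump format):

#!/usr/bin/env python3
# Tally per-inode-table active_size counters in a glusterfs statedump,
# e.g. lines of the form:
#   xlator.mount.fuse.itable.active_size=785929
import sys

sizes = {}
with open(sys.argv[1], errors="replace") as f:
    for line in f:
        key, sep, value = line.strip().partition("=")
        if sep and key.endswith("active_size") and value.isdigit():
            sizes[key] = int(value)

# Print the tables with the most active inodes first.
for key, n in sorted(sizes.items(), key=lambda kv: -kv[1]):
    print("%10d  %s" % (n, key))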

Comment 18 liuwei 2017-09-25 11:14:40 UTC
Created attachment 1330479 [details]
valgrind information

Comment 20 liuwei 2017-09-26 08:08:30 UTC
Created attachment 1330901 [details]
The new valgrind log
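When reading these logs, a sketch like the following can pull out the largest "definitely lost" records (it assumes valgrind memcheck's default text output; the log file path is passed as the first argument):

#!/usr/bin/env python3
# Summarise "definitely lost" loss records in a valgrind memcheck log,
# e.g. lines of the form:
#   ==1234== 2,064 bytes in 3 blocks are definitely lost in loss record 10 of 50
import re
import sys

pattern = re.compile(r"==\d+==\s+([\d,]+) bytes in [\d,]+ blocks are definitely lost")

records = []
with open(sys.argv[1], errors="replace") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            records.append(int(m.group(1).replace(",", "")))

records.sort(reverse=True)
print("%d 'definitely lost' records, %d bytes total" % (len(records), sum(records)))
for size in records[:10]:
    print("  %d bytes" % size)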

Comment 25 hari gowtham 2017-09-27 06:44:11 UTC
Hi,

One more thing to add.

We need the server statedump for the volume CCIFL as well, since that was the volume reported to have high memory consumption.
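For reference, capturing that dump on a server node is roughly the following (a sketch; it assumes the default dump directory /var/run/gluster, which the server.statedump-path volume option can override):

#!/usr/bin/env python3
# Capture a server-side statedump of the CCIFL volume. Run on a
# server node with the gluster CLI available.
import glob
import subprocess

VOLUME = "CCIFL"

# Ask every brick process of the volume to write a statedump.
subprocess.run(["gluster", "volume", "statedump", VOLUME], check=True)

# List the dump files that were produced.
for path in sorted(glob.glob("/var/run/gluster/*.dump.*")):
    print(path)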

Thanks,
Hari.

Comment 27 Atin Mukherjee 2017-10-05 04:05:55 UTC
*** Bug 1497108 has been marked as a duplicate of this bug. ***

Comment 36 Amar Tumballi 2018-10-30 08:38:57 UTC
There is a proposed patch which fixes the leak, and there has been no activity on the case. We are trying to fix all the known leaks upstream with ASan-based tests, etc.

With this information we are closing the bug; we will reopen it if there is more activity here. (The proposal to close this bug was made 5 months back.)

Comment 37 Red Hat Bugzilla 2023-09-14 04:07:39 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.

