Bug 1664046 - Memory utilization is constantly increasing
Summary: Memory utilization is constantly increasing
Keywords:
Status: CLOSED DUPLICATE of bug 1667169
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-07 15:23 UTC by Filip Balák
Modified: 2019-01-23 09:49 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-21 04:18:36 UTC
Embargoed:


Attachments
host dashboard (111.73 KB, image/png), 2019-01-07 15:23 UTC, Filip Balák

Description Filip Balák 2019-01-07 15:23:12 UTC
Created attachment 1519042 [details]
host dashboard

Description of problem:
I installed gluster, created 3 volumes, installed Web Administration, and left it running for 20 days. I didn't copy any data to the volumes. Now when I open the dashboard for any of the hosts, I see a constant increase in memory usage on the node. This memory consumption appears to be caused by glusterd.

Version-Release number of selected component (if applicable):
# rpm -qa *gluster*|sort
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64
glusterfs-events-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-rdma-3.12.2-32.el7rhgs.x86_64
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.3.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
tendrl-gluster-integration-1.6.3-13.el7rhgs.noarch
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch

How reproducible:
Not sure. I see it on 2/2 of my configurations.

Steps to Reproduce:
1. Install gluster.
2. Create cluster with 3 volumes.
3. Install WA.
4. Import cluster.
5. Leave it running for a few days.
6. Open the host dashboard for any host and set `Last 30 days` in the dashboard's time selector.
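
For reference, the growth can also be checked outside the Grafana dashboard. A minimal sketch, assuming glusterd is running on the node and procps is available (the sampling interval and log path are arbitrary choices, not part of the reported setup):

# sample glusterd resident memory (RSS, in KiB) once per hour
while true; do
    echo "$(date -u +%FT%TZ) $(ps -o rss= -p "$(pidof glusterd)")" >> /tmp/glusterd-rss.log
    sleep 3600
done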

Actual results:
There is a constant increase in memory utilization.

Expected results:
Memory utilization should be constant as there were no actions on the machines.

Additional info:

Comment 10 Filip Balák 2019-01-23 09:49:22 UTC
I don't think this is fixed. Yesterday I updated gluster and web administration (so I now have gluster at the version specified in the Fixed In Version field of this BZ), but I still see a constant increase in memory. Memory has increased by only 1% so far, but it seems to still be growing. I will let it run at least until Friday to see whether the growth continues, but it looks like there is still a memory leak and this bugzilla shouldn't be closed.
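
To help narrow down where the memory goes, a hedged sketch for capturing glusterd statedumps to compare over time (this assumes the default statedump directory /var/run/gluster; the filename pattern shown is illustrative):

# ask glusterd to write a statedump of its memory pools and allocations
kill -SIGUSR1 "$(pidof glusterd)"
# dumps typically appear as /var/run/gluster/glusterdump.<pid>.dump.<timestamp>;
# taking one dump now and another after a day, then diffing the memory
# accounting sections, should show which allocations keep growing
ls -lt /var/run/gluster/ | head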

Tested with:
tendrl-ansible-1.6.3-11.el7rhgs.noarch
tendrl-api-1.6.3-10.el7rhgs.noarch
tendrl-api-httpd-1.6.3-10.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-3.el7rhgs.noarch
tendrl-commons-1.6.3-15.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-13.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-20.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-3.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-20.el7rhgs.noarch
tendrl-node-agent-1.6.3-15.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-3.el7rhgs.noarch
tendrl-ui-1.6.3-14.el7rhgs.noarch
glusterfs-3.12.2-39.el7rhgs.x86_64
glusterfs-api-3.12.2-39.el7rhgs.x86_64
glusterfs-cli-3.12.2-39.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-39.el7rhgs.x86_64
glusterfs-events-3.12.2-39.el7rhgs.x86_64
glusterfs-fuse-3.12.2-39.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-39.el7rhgs.x86_64
glusterfs-libs-3.12.2-39.el7rhgs.x86_64
glusterfs-rdma-3.12.2-39.el7rhgs.x86_64
glusterfs-server-3.12.2-39.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch

