Bug 1296695 - Overall higher memory usage on 5.5.2.0
Status: CLOSED ERRATA
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Performance
Version: 5.5.0
Hardware/OS: Unspecified / Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.5.4
Assigned To: dmetzger
QA Contact: Alex Krzos
Depends On:
Blocks:
Reported: 2016-01-07 16:46 EST by Alex Krzos
Modified: 2016-05-31 09:40 EDT
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2016-05-31 09:40:36 EDT
Type: Bug




External Trackers
Tracker: Red Hat Product Errata RHBA-2016:1101
Priority: normal
Status: SHIPPED_LIVE
Summary: CFME 5.5.4 bug fixes and enhancement update
Last Updated: 2016-05-31 13:40:10 EDT

Description Alex Krzos 2016-01-07 16:46:27 EST
Description of problem:
Overall higher memory usage on 5.5.2.0 in comparison to 5.5.0.13-2 for specific memory baselining scenarios:

VMware environments tested with C&U show the following additional appliance memory used compared to 5.5.0.13-2:
Small (100vms) ~532MiB
Medium (1k vms) ~355MiB
Large (3k vms) ~326MiB

Version-Release number of selected component (if applicable):
5.5.2.0

How reproducible:
A single set of tests was run and compared against 5.5.0.13-2.

Additional info:


Analysis on a per worker RSS Memory measurement shows some notable memory gains for the following workers:

MiqScheduleWorker - 60-30MiB - Filed separately (BZ1296192) for occasionally exceeding its memory threshold
MiqWebServiceWorker - 44-48MiB
MiqEmsRefreshCoreWorker - 24-27MiB
ManageIQ::Providers::Vmware::InfraManager::EventCatcher - 26-32MiB
MiqEventHandler - 25-32MiB
ManageIQ::Providers::Vmware::InfraManager::MetricsCollectorWorker - 19-38MiB
MiqEmsMetricsProcessorWorker - 18-37MiB
MiqReportingWorker - 22-30MiB
evm_server.rb - 19-30MiB
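The per-worker RSS figures above come from measurement tooling not shown in this BZ. As a rough sketch of how such numbers can be gathered on a Linux appliance (the "Worker" command-line filter is an assumption for illustration, not the actual CFME tooling):

```ruby
# Sketch only: report RSS in MiB for processes whose command line
# mentions "Worker". NOT the tooling behind this BZ's data; it just
# illustrates reading VmRSS from /proc on Linux.
def rss_mib(pid)
  kb = File.read("/proc/#{pid}/status")[/^VmRSS:\s+(\d+) kB/, 1].to_i
  (kb / 1024.0).round(1)
end

Dir.glob("/proc/[0-9]*/cmdline").each do |path|
  pid = path.split("/")[2].to_i
  cmd = File.read(path).tr("\0", " ") rescue next
  puts format("%-8d %8.1f MiB  %s", pid, rss_mib(pid), cmd.strip) if cmd.include?("Worker")
end
```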
Comment 4 Joe Rafaniello 2016-05-24 14:28:01 EDT
Note: the concurrent-ruby gem was added in cfme 5.5.2.0 because sprockets 3.5.0+ was released just before we built cfme 5.5.2.0:

From: https://rubygems.org/gems/sprockets/versions
3.5.2 - December 8, 2015 (68.5 KB)
3.5.1 - December 5, 2015 (68.5 KB)
3.5.0 - December 3, 2015 (68.5 KB)

Here's a comparison of Gemfile.lock from 5.5.0.13 -> 5.5.2.0.
Summary:  sprockets 3.4.1 -> 3.5.2, added concurrent-ruby, fog-sakuracloud 1.4.0 -> 1.7.3, linux_admin 0.12.1 -> 0.13.0.  The rest are patch releases.


--- a/Gemfile.lock55013
+++ b/Gemfile.lock5520

-    amq-protocol (2.0.0)
+    amq-protocol (2.0.1)

-    autoprefixer-rails (6.1.2)
+    autoprefixer-rails (6.2.1)

-    css_splitter (0.4.2)
+    css_splitter (0.4.3)

+    concurrent-ruby (1.0.0)

-    dalli (2.7.4)
+    dalli (2.7.5)

-    fog-sakuracloud (1.4.0)
+    fog-sakuracloud (1.7.3)

-    gettext_i18n_rails (1.3.1)
+    gettext_i18n_rails (1.3.2)

-    iniparse (1.4.1)
+    iniparse (1.4.2)

-    linux_admin (0.12.1)
+    linux_admin (0.13.0)

-    rails-assets-bootstrap-select (1.7.5)
+    rails-assets-bootstrap-select (1.7.7)

-    rdoc (4.2.0)
+    rdoc (4.2.1)
+      json (~> 1.4)

-    responders (2.1.0)
-      railties (>= 4.2.0, < 5)
+    responders (2.1.1)
+      railties (>= 4.2.0, < 5.1)

-    sass (3.4.19)
+    sass (3.4.20)

-    secure_headers (2.4.3)
+    secure_headers (2.4.4)

-    sprockets (3.4.1)
+    sprockets (3.5.2)
+      concurrent-ruby (~> 1.0)

-  linux_admin (~> 0.12.1)
+  linux_admin (~> 0.13.0)

-  ovirt (~> 0.7.0)
+  ovirt (~> 0.7.1)

+  sprockets-rails (< 3.0.0)
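The version bumps summarized above can be extracted mechanically from the two lockfiles. A hedged sketch (the file names follow the diff header above; parsing only top-level `    name (version)` entries is a simplifying assumption about the lockfile layout):

```ruby
# Sketch: diff top-level gem versions between two Gemfile.lock files.
# Handles only "    name (x.y.z)" spec lines; nested dependency
# constraints (e.g. "      json (~> 1.4)") are intentionally ignored.
def versions(lockfile)
  File.readlines(lockfile).each_with_object({}) do |line, gems|
    gems[$1] = $2 if line =~ /^    (\S+) \(([\d.]+)\)$/
  end
end

if File.exist?("Gemfile.lock55013") && File.exist?("Gemfile.lock5520")
  old, new = versions("Gemfile.lock55013"), versions("Gemfile.lock5520")
  (old.keys | new.keys).sort.each do |gem|
    next if old[gem] == new[gem]
    puts "#{gem}: #{old[gem] || '(added)'} -> #{new[gem] || '(removed)'}"
  end
end
```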
Comment 5 Joe Rafaniello 2016-05-24 14:28:58 EDT
Other differences:

RHEL 7.1 -> 7.2

-glibc-2.17-78.el7.x86_64
-glibc-common-2.17-78.el7.x86_64
+glibc-2.17-106.el7_2.1.x86_64
+glibc-common-2.17-106.el7_2.1.x86_64

-kernel-3.10.0-229.20.1.el7.x86_64
-kernel-tools-3.10.0-229.20.1.el7.x86_64
-kernel-tools-libs-3.10.0-229.20.1.el7.x86_64
+kernel-3.10.0-327.3.1.el7.x86_64
+kernel-tools-3.10.0-327.3.1.el7.x86_64
+kernel-tools-libs-3.10.0-327.3.1.el7.x86_64
Comment 6 Joe Rafaniello 2016-05-24 14:29:48 EDT
Comment 5 was also comparing 5.5.0.13 -> 5.5.2.0
Comment 7 Joe Rafaniello 2016-05-24 14:36:46 EDT
Sprockets 3.5.0 added concurrent-ruby as a dependency here: https://github.com/rails/sprockets/pull/193
Comment 8 Joe Rafaniello 2016-05-24 18:05:57 EDT
bundler also changed:

-   1.10.6
+   1.11.2
Comment 9 Joe Rafaniello 2016-05-25 15:29:59 EDT
5.5.2.0 moved from bundler 1.10.6 to 1.11.2, which consumes more memory.

bundler 1.10.6, 131MB (cfme 5.5.0.13)
bundler 1.11.2, 173-177MB (cfme 5.5.2.0)
bundler 1.12.0, 173-174 MB
bundler 1.12.1, 121-124 MB
bundler 1.12.2, 121-123 MB
bundler 1.12.4, 122-124 MB


We should use bundler 1.12.1+ for less memory usage.
5.5.4.0 has bundler 1.11.2
5.5.4.1 has bundler 1.12.3
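Given the measurements above, a simple runtime guard could flag the memory-hungry bundler range. This is an illustrative sketch, not something CFME actually ships; the 1.12.1 cutoff is taken from the figures in this comment:

```ruby
# Sketch: warn when running under a bundler version in the range that
# showed ~50MB higher RSS above (1.11.x through 1.12.0). The 1.12.1
# cutoff comes from the measurements quoted in this comment.
require "bundler"

MIN_LEAN_BUNDLER = Gem::Version.new("1.12.1")
current = Gem::Version.new(Bundler::VERSION)
if current < MIN_LEAN_BUNDLER
  warn "bundler #{current} predates 1.12.1; expect higher memory usage"
end
```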
Comment 10 Joe Rafaniello 2016-05-25 15:30:46 EDT
This was fixed in 5.5.4.1, the latest zstream.
Comment 12 Oleg Barenboim 2016-05-25 15:36:41 EDT
Alex - so is this an issue on cfme 5.6 or only some versions of cfme 5.5?  Reason I am asking is that this BZ has the cfme-5.6 flag.
Comment 13 Alex Krzos 2016-05-25 17:31:16 EDT
As a short note I have seen reduced memory usage in both my idle and C&U workloads with 5.5.4.1 and 5.5.4.2:


Idle appliance (Used MiB from /proc/meminfo):

5.5.4.2 - 3123.86
5.5.4.1 - 3116.29
5.5.4.0 - 3600.39
5.5.3.4 - 3577.49
5.5.2.0 - 3500.66
5.5.0.13-2 - 2968.36

We can see that memory usage began to drop starting with 5.5.4.1/5.5.4.2.
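The "Used MiB from /proc/meminfo" figures above can be approximated with the conventional used-memory calculation; whether CFME's measurement used exactly this formula is an assumption:

```ruby
# Sketch: compute "used" memory the conventional way
# (MemTotal - MemFree - Buffers - Cached). Whether the figures in
# this comment used exactly this formula is an assumption.
info = File.read("/proc/meminfo").scan(/^(\w+):\s+(\d+) kB/).to_h
used_kib = %w[MemFree Buffers Cached].reduce(info["MemTotal"].to_i) do |used, key|
  used - info[key].to_i
end
puts format("Used: %.2f MiB", used_kib / 1024.0)
```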

Inventory + C&U comparison with a medium VMware provider (1000vms / 500 online) over 4 hours.

5.5.4.1 - 5039.45
5.5.3.4 - 5517.71
5.5.0.13-2 - 4774.13

With the C&U workload we can see a reduction of almost 500MiB of used memory for the same workload with the same provider for the same amount of time.

In response to Oleg, here is my idle 5.6 data:

5.6.0.7 - 3461.38
5.6.0.6 - 3497.67
5.6.0.5 - 3421.04


And the same C&U scenario as above:

5.6.0.7 - 5354.16
5.6.0.6 - 5241.89
Comment 14 Joe Rafaniello 2016-05-25 20:55:20 EDT
Maybe the 5.5.4.1 -> 5.6.0.7 memory jump warrants a new bug.  The fix for 5.5.2.0 (the description of this bug) has already been done and doesn't affect 5.6.0.  Therefore, the 5.5.4.1 -> 5.6.0.7 memory jump has a different cause and fix.
Comment 15 dmetzger 2016-05-26 08:48:09 EDT
Closing this ticket as the bundler issue discussed starting in comment #9 resolved the large memory jump.
Comment 17 Alex Krzos 2016-05-26 20:06:21 EDT
The original basis for this BZ was 30 minutes of test data on memory usage.  I am closing this bug because 5.5.4.1 and 5.5.4.2 show reduced memory usage in idle and C&U scenarios compared to 5.5.2.0.  I will open a separate bug for 5.6 vs 5.5 to continue tracking efforts to reduce overall memory usage of cfme.

With longer time frames:

Idle - 1 hour
5.5.0.13-2 - 2968.36
5.5.2.0 - 3500.66
5.5.3.4 - 3577.49
5.5.4.0 - 3600.39
5.5.4.1 - 3116.29
5.5.4.2 - 3123.86

The high water mark was 5.5.4.0

C&U VMware Medium - 4 hours
5.5.0.13-2 - 4774.13
5.5.2.0 - The data for this test run covers only 2 hours and should not be compared
5.5.3.4 - 5517.71
5.5.4.1 - 5039.45


Closing this bug, as we have now reduced idle memory usage from 5.5.2.0 to 5.5.4.1/5.5.4.2 by approximately 370-380MiB.
Compared to 5.5.4.1/5.5.4.2, 5.5.0.13-2 still uses ~150MiB less memory.

Comparing 5.5.4.1 to 5.5.3.4 in the 4-hour C&U scenario with a medium VMware provider, we are saving ~478MiB.  Memory usage on 5.5.4.1 remains 265.32MiB higher than on 5.5.0.13-2.
Comment 19 errata-xmlrpc 2016-05-31 09:40:36 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1101
