Bug 1432263 - Memory leaks when using OpenSCAP
Summary: Memory leaks when using OpenSCAP
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: SCAP Plugin
Version: 6.2.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: Unspecified
Assignee: Ondřej Pražák
QA Contact: Sanket Jagtap
URL:
Whiteboard:
Duplicates: 1435469
Depends On:
Blocks: 1435022
 
Reported: 2017-03-15 00:04 UTC by David Davis
Modified: 2021-03-11 15:03 UTC
CC List: 14 users

Fixed In Version: rubygem-smart_proxy_openscap-0.5.3.8-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1446708
Environment:
Last Closed: 2017-06-20 17:23:14 UTC
Target Upstream Version:
Embargoed:


Attachments
journalctl of oom killing the foreman-proxy process (23.17 KB, text/plain)
2017-03-15 00:04 UTC, David Davis
captures from ps of the foreman-proxy memory usage growing (1.85 KB, text/plain)
2017-03-15 00:04 UTC, David Davis
oscap reports (108.48 KB, image/png)
2017-06-02 06:39 UTC, Sanket Jagtap


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 18926 0 High Closed Memory leaks when using OpenSCAP 2019-11-12 20:33:56 UTC
Red Hat Product Errata RHBA-2017:1553 0 normal SHIPPED_LIVE Satellite 6.2.10 Async Bug Release 2017-06-20 21:19:07 UTC

Description David Davis 2017-03-15 00:04:07 UTC
Created attachment 1263144 [details]
journalctl of oom killing the foreman-proxy process

Description of problem:

A customer noticed that foreman-proxy was dying, and we suspected the cause was the large number of clients being checked by the OpenSCAP plugin. I tried to reproduce this locally and saw the same issue.


Version-Release number of selected component (if applicable):

6.2.7


How reproducible:

100%


Steps to Reproduce:
1. Set up a system with one host and an OpenSCAP policy, and attach the policy to the host.
2. Set OpenSCAP to scan the host every minute.
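A per-minute scan is not a stock schedule; one way to approximate step 2 is a cron entry that invokes foreman_scap_client directly (a sketch only: the policy id `1` and the cron file name are illustrative assumptions, not values from this bug).

```shell
# Hypothetical reproducer: run foreman_scap_client for policy id 1 every
# minute. The policy id and file path are assumptions, not from this bug.
cat > /etc/cron.d/foreman_scap_client_minutely <<'EOF'
* * * * * root /usr/bin/foreman_scap_client 1
EOF
```

Each run uploads an ARF report through the smart proxy, which is what exercises the leaking code path.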

Actual results:

Memory usage balloons until the OOM killer terminates foreman-proxy


Expected results:

Memory usage remains relatively constant.


Additional info:

Attaching some captures of the memory usage via ps and a journalctl log of the foreman-proxy being killed.
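Captures like these can be produced with a small sampler such as the following (a sketch; `sample_rss` is a hypothetical helper, and the `ps` invocation is standard procps keyword output).

```shell
# sample_rss PID COUNT INTERVAL: print "epoch_seconds rss_kb" lines for PID,
# COUNT times, INTERVAL seconds apart. A hypothetical helper for capturing
# the kind of memory-usage data attached to this bug.
sample_rss() {
  pid=$1; count=$2; interval=$3
  i=0
  while [ "$i" -lt "$count" ]; do
    printf '%s %s\n' "$(date +%s)" "$(ps -o rss= -p "$pid" | tr -d ' ')"
    i=$((i + 1))
    if [ "$i" -lt "$count" ]; then sleep "$interval"; fi
  done
}
```

Pointing it at the smart-proxy PID (e.g. `sample_rss "$(pgrep -of 'bin/smart-proxy')" 60 60`) logs one RSS sample per minute for an hour.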

Comment 1 David Davis 2017-03-15 00:04:56 UTC
Created attachment 1263145 [details]
captures from ps of the foreman-proxy memory usage growing

Comment 2 Ondřej Pražák 2017-03-16 08:12:58 UTC
Created redmine issue http://projects.theforeman.org/issues/18926 from this bug

Comment 6 Satellite Program 2017-03-20 10:01:43 UTC
Upstream bug assigned to oprazak

Comment 7 Satellite Program 2017-03-20 10:01:48 UTC
Upstream bug assigned to oprazak

Comment 8 Satellite Program 2017-03-20 12:01:42 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/18926 has been resolved.

Comment 14 Ivan Necas 2017-04-05 06:37:09 UTC
*** Bug 1435469 has been marked as a duplicate of this bug. ***

Comment 18 Sanket Jagtap 2017-06-02 06:33:17 UTC
Build: Satellite 6.2.10 snap2

Verification steps:
1) Set up a system with one host and an OpenSCAP policy scanning every minute.

Observations:
Memory usage when foreman_scap_client had just started:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foreman+ 10569  0.0  1.5 746028 194036 ?       Sl   03:40   0:02 ruby /usr/share/foreman-proxy/bin/smart-proxy

Memory usage after approximately 18 hours:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foreman+ 10569  1.0  2.1 816688 264588 ?       Sl   Jun01  13:40 ruby /usr/share/foreman-proxy/bin/smart-proxy

 systemctl status foreman-proxy
● foreman-proxy.service - Foreman Proxy
   Loaded: loaded (/usr/lib/systemd/system/foreman-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2017-06-01 03:40:03 EDT; 22h ago
  Process: 10562 ExecStart=/usr/share/foreman-proxy/bin/smart-proxy (code=exited, status=0/SUCCESS)
 Main PID: 10569 (ruby)
   CGroup: /system.slice/foreman-proxy.service
           └─10569 ruby /usr/share/foreman-proxy/bin/smart-proxy

Memory usage remains relatively constant, and foreman-proxy is no longer killed by the OOM killer.
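For scale, the growth between the two samples works out as follows (RSS values in KB, taken directly from the ps output above):

```shell
# RSS values (KB) copied from the two ps captures above
start_rss=194036   # shortly after the service started
end_rss=264588     # after roughly 18 hours
growth_kb=$((end_rss - start_rss))
growth_mb=$((growth_kb / 1024))
echo "RSS grew by ${growth_kb} KB (~${growth_mb} MB) over ~18 hours"
```

Roughly 68 MB over 18 hours is consistent with normal warm-up of a long-running Ruby process, rather than the unbounded growth that previously triggered the OOM killer.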

Comment 19 Sanket Jagtap 2017-06-02 06:39:21 UTC
Created attachment 1284317 [details]
oscap reports

ARF reports were also sent to Satellite successfully.

Comment 20 Rajan Gupta 2017-06-07 16:59:57 UTC
Hi,

The provided hotfix did not resolve the issue.

# rpm -Uvh rubygem-smart_proxy_openscap-0.5.3.6-2.RHBZ1432263.el7sat.noarch.rpm

Could you please help with the hotfix?

Memory usage on the content hosts is still high.

Is it possible to provide a hotfix for those RHEL 5 client machines where the customer is using an EUS subscription?

Thanks,
Rajan

Comment 21 Chris Duryee 2017-06-07 17:16:41 UTC
Rajan,

This BZ is specific to the satellite server itself, not memory usage on the content hosts.

Comment 26 errata-xmlrpc 2017-06-20 17:23:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1553

