Created attachment 1263144 [details]
journalctl of oom killing the foreman-proxy process

Description of problem:
A customer noticed that foreman-proxy was dying, and we suspected it was due to the large number of clients being checked by the OpenSCAP plugin. I tried to reproduce locally and saw the same issue.

Version-Release number of selected component (if applicable):
6.2.7

How reproducible:
100%

Steps to Reproduce:
1. Set up a system with one host, create an OpenSCAP policy, and attach the policy to the host.
2. Set OpenSCAP to scan the host every minute.

Actual results:
Memory usage balloons until the OOM killer terminates foreman-proxy.

Expected results:
Memory usage remains relatively constant.

Additional info:
Attaching some captures of the memory usage via ps and a journalctl log of foreman-proxy being killed.
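For anyone reproducing this, a minimal sampling loop along the following lines captures the growth over time. This is only a sketch; the one-minute interval and the /tmp/foreman-proxy-mem.log path are arbitrary choices, not the exact commands used for the attached captures.

# sample foreman-proxy memory once a minute (illustrative sketch)
while true; do
    date >> /tmp/foreman-proxy-mem.log
    ps -o pid,%mem,vsz,rss,cmd -C ruby | grep smart-proxy >> /tmp/foreman-proxy-mem.log
    sleep 60
done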
Created attachment 1263145 [details]
captures from ps of the foreman-proxy memory usage growing
Created redmine issue http://projects.theforeman.org/issues/18926 from this bug
Upstream bug assigned to oprazak
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/18926 has been resolved.
*** Bug 1435469 has been marked as a duplicate of this bug. ***
Build: Satellite 6.2.10 snap2

Verification steps:
1) System with one host with an OpenSCAP policy set to scan every minute.

Observations:

Memory usage when foreman_scap_client had just started:
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START   TIME COMMAND
foreman+ 10569  0.0  1.5 746028 194036 ?    Sl   03:40   0:02 ruby /usr/share/foreman-proxy/bin/smart-proxy

Memory usage after approximately 18 hours:
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START   TIME COMMAND
foreman+ 10569  1.0  2.1 816688 264588 ?    Sl   Jun01  13:40 ruby /usr/share/foreman-proxy/bin/smart-proxy

# systemctl status foreman-proxy
● foreman-proxy.service - Foreman Proxy
   Loaded: loaded (/usr/lib/systemd/system/foreman-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2017-06-01 03:40:03 EDT; 22h ago
  Process: 10562 ExecStart=/usr/share/foreman-proxy/bin/smart-proxy (code=exited, status=0/SUCCESS)
 Main PID: 10569 (ruby)
   CGroup: /system.slice/foreman-proxy.service
           └─10569 ruby /usr/share/foreman-proxy/bin/smart-proxy

Memory usage remains relatively constant and foreman-proxy is not killed.
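For context, the RSS delta between the two ps samples above is 264588 KB - 194036 KB = 70552 KB, i.e. roughly 69 MB over ~18 hours (on the order of 4 MB per hour), rather than the unbounded growth that previously ended with the OOM killer terminating the process.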
Created attachment 1284317 [details]
oscap reports

ARF reports were also sent to the Satellite successfully.
Hi,

The provided hotfix did not resolve the issue.

# rpm -Uvh rubygem-smart_proxy_openscap-0.5.3.6-2.RHBZ1432263.el7sat.noarch.rpm

Could you please help with the hotfix? Content hosts are still utilizing high memory. Is it possible to provide a hotfix for those RHEL 5 client machines where the customer is using an EUS subscription?

Thanks,
Rajan
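For reference, assuming the hotfix RPM installed cleanly, one way to confirm the installed plugin version and make sure the running proxy picks it up is plain rpm/systemctl usage (service name as shown earlier in this bug):

# rpm -q rubygem-smart_proxy_openscap
# systemctl restart foreman-proxy
# systemctl status foreman-proxy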
Rajan,

This BZ is specific to the Satellite server itself, not memory usage on the content hosts.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1553