Red Hat Bugzilla – Bug 1461967
OpenSCAP scanner uses an excessive amount of memory and disk space on content hosts
Last modified: 2017-07-18 16:52:14 EDT
Description of problem:
The OpenSCAP scanner uses an excessive amount of memory and disk space on content hosts.
There was already a bug filed for OpenSCAP memory utilization on the Satellite server, but the fix did not help on the host machines.
We asked the customer to upgrade the openscap packages running on the host machines, but this also did not resolve the issue.
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.7 (Santiago)
# rpm -qa | grep -e rubygem-foreman -e openscap
Version-Release number of selected component (if applicable):
Satellite 6.2.9 running on RHEL 7.3
Host Machine : RHEL 6.x
We need a fix for this issue at the earliest.
Rajan, could you please provide more information about which exact process eats memory and which files consume disk space? It is very unlikely that the integration is the cause; it is a tiny Ruby wrapper that runs the oscap scan utility. It is triggered by cron, so the memory should be freed as soon as the OpenSCAP scanner finishes. According to the case, it seems the customer was able to reproduce it when oscap was run manually, without Satellite being involved.
I think you should report that against OpenSCAP under RHEL.
Information about which policy they use might also help. The size of the report obviously grows with the number of rules in the policy.
The oscap process is consuming 99% of a CPU and a large amount of memory (about 8.7 GB resident, per the ps output below), and this happens irrespective of which profile the customer is using.
root 35869 99.2 13.2 11502648 8751848 ? R May19 5051:39 oscap xccdf eval --profile xccdf_example.com-rhel6-server-upstream_customized --results-arf /tmp/d20170519-35379-qq00ew/results.xml /var/lib/openscap/content/06fd06564f0d57fea6dedacb81e70c238a5e147d359d1a52287800a9e7cab04a.xml
Since that's oscap process itself, I don't think this is related to the Satellite 6 integration. Please move the bug to OpenSCAP project.
The customer is looking for a fix ASAP.
Could you please help me change the product as suggested? I am new to Bugzilla.
Moving to RHEL/openscap since this seems to be a problem with openscap scanner.
I am afraid I am not able to reproduce the issue. Can you provide us with a tarball containing a reproducer, i.e. a directory with the datastream/XCCDF, a tailoring file if necessary, and a shell script running openscap with the specific parameters? (Ideally this script would also report memory usage, so we are on the same page, comparing the same numbers.)
I tried to run it on RHEL 6.9 with some 5200 packages installed [basically a full install], and oscap did not go over 15% (300 MB) of RAM.
Created attachment 1295380 [details]
Zipped tar file containing my test scripts and tailoring file
Please find attached my test script, the combined tailoring file, and a script showing the oscap invocation parameters I am using. Also attached is the output of vmstat, which was running in the background during the scan. It shows free memory on my server at 3.7 GB at the beginning of the scan, dropping to about 80 MB as the scan was running.
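A reproducer along these lines can be sketched as a small wrapper that runs a command and samples its resident memory while it runs. This is a minimal sketch, not the attached script; the commented oscap invocation uses placeholder file names (tailoring.xml, ssg-rhel6-ds.xml) that must be adjusted to the actual content.

```shell
#!/bin/sh
# Run a command in the background and report its peak RSS (in kB),
# sampled once per second via ps.
run_and_sample() {
    "$@" &
    pid=$!
    peak=0
    while kill -0 "$pid" 2>/dev/null; do
        rss=$(ps -o rss= -p "$pid" 2>/dev/null | tr -d ' ')
        [ -n "$rss" ] && [ "$rss" -gt "$peak" ] && peak=$rss
        sleep 1
    done
    wait "$pid"
    echo "peak RSS: ${peak} kB"
}

# Hypothetical oscap invocation (paths and profile ID are placeholders):
# run_and_sample oscap xccdf eval \
#     --profile xccdf_example.com-rhel6-server-upstream_customized \
#     --tailoring-file tailoring.xml \
#     --results-arf results.xml ssg-rhel6-ds.xml

# Demonstration with a harmless command:
run_and_sample sleep 2
```

Sampling with ps is coarse (a short-lived spike between samples is missed), but it is enough to compare peak numbers between runs, which is what the reporter asked for.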
Thanks Jamie! Issue is reproduced consistently with the script.
This is actually an issue in the datastream used, originally reported in Bug 1270329 and fixed a long time ago. I won't close this Bugzilla just yet, but my suggestion is to either disable the rule "Ensure All Files Are Owned by a Group" (rule_no_files_unowned_by_group) or to use a datastream from a newer scap-security-guide.
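For reference, deselecting a rule is done in the tailoring file by extending the profile and marking the rule unselected. The snippet below is an illustrative sketch only: the Tailoring/Profile IDs are hypothetical, and the rule idref must match the full rule ID as it appears in the customer's datastream.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative XCCDF 1.2 tailoring; all IDs here are placeholders. -->
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_example.com_tailoring_rhel6">
  <xccdf:version time="2017-06-01T00:00:00">1</xccdf:version>
  <xccdf:Profile id="xccdf_example.com_profile_rhel6-customized"
                 extends="xccdf_example.com-rhel6-server-upstream">
    <xccdf:title>RHEL 6 upstream profile without the unowned-files rule</xccdf:title>
    <!-- Deselect the rule that triggers the excessive memory use. -->
    <xccdf:select idref="rule_no_files_unowned_by_group" selected="false"/>
  </xccdf:Profile>
</xccdf:Tailoring>
```

The tailoring is then passed to the scanner with oscap's `--tailoring-file` option and the customized profile ID.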
Thanks for your help in identifying the issue. I've updated to the latest datastream and am no longer encountering the memory issue. I've uploaded the new combined file with my tailoring to our Satellite server and have it set up to scan our RHEL 6 development and QA systems on Wednesday. I'll have an update on Thursday as to how that goes.
I guess no news is good news? If the newer version solved your issue, I would like to close this bug as fixed in the current release.
My apologies for the late response. Using the most recent datastream did resolve our issues. Our compliance scans are running fine from the Satellite server now. Feel free to close the Bugzilla.
Thanks again for all your help.
*** This bug has been marked as a duplicate of bug 1270329 ***