Bug 1461967

Summary: OpenSCAP scanner uses an excessive amount of memory and disk space on content hosts
Product: Red Hat Enterprise Linux 6 Reporter: Rajan Gupta <rajgupta>
Component: scap-security-guide    Assignee: Jan Černý <jcerny>
Status: CLOSED DUPLICATE QA Contact: BaseOS QE Security Team <qe-baseos-security>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 6.7    CC: jamie.dowdy, jbhatia, kelly.brown1, mhaicman, openscap-maint, oprazak, rajgupta, rdixon, scott.steverson, szadok
Target Milestone: pre-dev-freeze   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-07-18 20:51:44 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
Zipped tar file containing my test scripts and tailoring file none

Description Rajan Gupta 2017-06-15 18:53:16 UTC
Description of problem:
The OpenSCAP scanner uses an excessive amount of memory and disk space on content hosts.

There was already a bug for OpenSCAP memory utilization on the Satellite server, but the fix did not help on the host machines.
https://bugzilla.redhat.com/show_bug.cgi?id=1432263

We asked the customer to upgrade the OpenSCAP packages running on the host machines, but this did not resolve the issue either.

=====================Host Details=========================
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.7 (Santiago)
  
# rpm -qa | grep -e rubygem-foreman -e openscap
openscap-utils-1.2.13-2.el6.x86_64
rubygem-foreman_scap_client-0.1.2-2.el6sat.noarch
openscap-1.2.13-2.el6.x86_64
openscap-scanner-1.2.13-2.el6.x86_64
==========================================================

Version-Release number of selected component (if applicable):
Satellite 6.2.9 running on RHEL 7.3
Host Machine : RHEL 6.x

A fix for this issue is needed as soon as possible.

Regards,
Rajan

Comment 1 Marek Hulan 2017-06-16 13:30:57 UTC
Rajan, could you please provide more information about which exact process eats the memory and which files consume the disk space? It is very unlikely that the integration is the cause; it's a tiny Ruby wrapper that runs the openscap scan utility. It's triggered by cron, so the memory should be freed as soon as the openscap scanner finishes. According to the case, it seems that the customer was able to reproduce it when oscap was run manually, without Satellite being involved.

I think you should report that against OpenSCAP under RHEL.

Also, information about which policy they use might help. The size of the report obviously grows with the number of rules in the policy.

Comment 2 Rajan Gupta 2017-06-16 17:23:27 UTC
The oscap process is consuming 99% of the memory, and this happens irrespective of the profile the customer is using.

cat ps

root     35869 99.2 13.2 11502648 8751848 ?    R    May19 5051:39 oscap xccdf eval --profile xccdf_example.com-rhel6-server-upstream_customized --results-arf /tmp/d20170519-35379-qq00ew/results.xml /var/lib/openscap/content/06fd06564f0d57fea6dedacb81e70c238a5e147d359d1a52287800a9e7cab04a.xml
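For anyone trying to quantify this on the host, a minimal sketch of how the scanner's resident memory could be sampled while a scan runs (the 30-second interval is arbitrary; pgrep and ps are standard procps tools):

# sample RSS (in KB), CPU and runtime of any running oscap process every 30 seconds
while pgrep -x oscap > /dev/null; do
    ps -o pid,rss,pcpu,etime,args -C oscap
    sleep 30
done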

Comment 3 Marek Hulan 2017-06-19 07:28:11 UTC
Since that's the oscap process itself, I don't think this is related to the Satellite 6 integration. Please move the bug to the OpenSCAP project.

Comment 4 Rajan Gupta 2017-06-19 10:01:18 UTC
Hi Marek,

The customer is looking for the fix ASAP.
Could you please help me move the bug as suggested? I am new to Bugzilla.

Regards,
Rajan

Comment 5 Ondřej Pražák 2017-06-20 13:57:42 UTC
Moving to RHEL/openscap since this seems to be a problem with the openscap scanner.

Comment 10 Marek Haicman 2017-07-07 14:48:03 UTC
Hello Rajan,
I am afraid I am not able to reproduce the issue. Can you provide us with a tarball containing a reproducer, i.e. a directory with the datastream/XCCDF, tailoring if necessary, and a shell script running openscap with the specific parameters? (Ideally this script would also report memory usage, so we are on the same page and comparing the same numbers.)

I have tried to run it on RHEL 6.9, with some 5200 packages installed [basically a full install], and oscap did not go over 15% (300 MB) of RAM.
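For reference, a rough sketch of what such a reproducer script could look like; the profile name is reused from comment 2, while the tailoring and datastream file names are placeholders, not taken from the customer's setup:

#!/bin/sh
# run.sh -- run the scan while vmstat logs memory usage in the background
vmstat 5 > vmstat.log 2>&1 &
VMSTAT_PID=$!
oscap xccdf eval \
    --profile xccdf_example.com-rhel6-server-upstream_customized \
    --tailoring-file tailoring.xml \
    --results-arf results-arf.xml \
    ssg-rhel6-ds.xml
kill "$VMSTAT_PID"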

Comment 11 Jamie Dowdy 2017-07-07 18:53:08 UTC
Created attachment 1295380 [details]
Zipped tar file containing my test scripts and tailoring file

Please find attached my test script, combined tailoring file, and a script showing the oscap invocation parameters I am using. Also included is the output of vmstat, which was running in the background during the scan. It shows memory on my server at 3.7 GB at the beginning of the scan, dropping to about 80 MB as the scan was running.

Comment 12 Marek Haicman 2017-07-10 10:38:59 UTC
Thanks Jamie! The issue is reproduced consistently with the script.

Comment 13 Marek Haicman 2017-07-10 19:26:40 UTC
Hello,
this is actually an issue in the datastream used, originally reported in Bug 1270329 and fixed a long time ago. I won't close this bugzilla just yet, but my suggestion is either to disable the rule "Ensure All Files Are Owned by a Group" (rule_no_files_unowned_by_group) or to use the datastream from a newer scap-security-guide.
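For the second option, a rough sketch of the steps on a RHEL 6 content host; the content path is the usual scap-security-guide default and the tailoring file name is a placeholder, so both may differ in the customer's environment:

# update to a newer SCAP Security Guide and check what the datastream contains
yum update scap-security-guide
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
# re-run the scan against the newer datastream, keeping the existing customized profile
oscap xccdf eval \
    --profile xccdf_example.com-rhel6-server-upstream_customized \
    --tailoring-file tailoring.xml \
    --results-arf results-arf.xml \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml

For the first option, the profile in the tailoring file would instead simply deselect the offending rule.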

Comment 15 Jamie Dowdy 2017-07-11 14:52:43 UTC
Thanks for your help in identifying the issue. I've updated to the latest datastream and am no longer encountering the memory issue. I've uploaded the new combined file with my tailoring to our Satellite server and have it set up to scan our RHEL 6 development and QA systems on Wednesday. I'll have an update on Thursday as to how that goes.

Comment 16 Marek Haicman 2017-07-18 19:38:28 UTC
Hello Jamie,
I guess no news is good news? If the newer version solved your issue, I would like to close this bug as fixed in the current release.

Comment 17 Jamie Dowdy 2017-07-18 20:44:02 UTC
My apologies for the late response. Using the most recent datastream did resolve our issues. Our compliance scans are running fine from the Satellite server now. Feel free to close the Bugzilla.

Thanks again for all your help. 

Jamie.

Comment 18 Marek Haicman 2017-07-18 20:51:44 UTC

*** This bug has been marked as a duplicate of bug 1270329 ***