Bug 1609009

Summary: oscap reports not showing in Satellite when using LB capsules setup
Product: Red Hat Satellite
Component: SCAP Plugin
Version: 6.3.1
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Reporter: sthirugn <sthirugn>
Assignee: Ondřej Pražák <oprazak>
QA Contact: sthirugn <sthirugn>
CC: egolov, jhutar, mhulan, oprazak, sthirugn, zhunting
Keywords: Triaged
Target Milestone: 6.4.0
Target Release: Unused
Type: Bug
Fixed In Version: tfm-rubygem-foreman_openscap-0.10.3-1, foreman-installer-1.18.0.2-1, rubygem-smart_proxy_openscap-0.6.11-1
Doc Type: If docs needed, set a value
Last Closed: 2018-10-16 19:09:35 UTC

Description sthirugn@redhat.com 2018-07-26 17:28:49 UTC
Description of problem:
oscap reports not showing in Satellite when using LB capsules setup

Version-Release number of selected component (if applicable):
Satellite 6.3 with load balanced capsules

How reproducible:
Always

Steps to Reproduce:
1. Setup Satellite server and 2 capsules which are load balanced with load-balanced FQDN like `capsule.example.test`
2. Override the oscap puppet classes as shown below:
Go to Configure -> Classes -> foreman_scap_client -> Smart Class Parameter

server -> Default behavior:
   check `Override` checkbox
   Key type: string
   Default value: `capsule.example.test`

port -> Default behavior:
   check `Override` checkbox
   Key type: integer
   Default value: 9090
3. Create a hostgroup without setting values for Puppet Master, Puppet CA, or OpenSCAP Capsule.
4. Register and configure a client to use oscap with the LB capsule FQDN.
5. Manually trigger an oscap report, e.g. `foreman_scap_client 1` (see the example client config after these steps).
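
For reference, a minimal sketch of what /etc/foreman_scap_client/config.yaml should end up containing on the client once the overrides above are applied. The `server`/`port` values come from the Smart Class Parameters; the trailing comment about certificates and policy sections only indicates what else the puppet class manages, and the exact keys and paths will differ per deployment:

   # /etc/foreman_scap_client/config.yaml (sketch)
   :server: 'capsule.example.test'   # load-balanced capsule FQDN from the `server` override
   :port: 9090                       # from the `port` override
   # ...plus SSL certificate/key paths and one section per assigned policy,
   # all managed by the foreman_scap_client puppet class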

Actual results:
oscap reports are not visible in Satellite UI - https://satellite.example.com/compliance/arf_reports

Expected results:
The oscap report should be visible in the above url.

Additional info:
1. The oscap reports are working fine when the hostgroup is associated with one of the capsules and the puppet class override is removed.
2. The oscap reports also work when changing the capsule name from a regular capsule FQDN to the load-balanced FQDN in /etc/foreman_scap_client/config.yaml after doing step 1 above.
3. The issue happens only when the puppet classes are overridden with an LB capsule FQDN and port number and the client is then registered.

Comment 1 sthirugn@redhat.com 2018-07-26 17:32:58 UTC
Error seen in production.log:
2018-07-26 17:29:35 d5ee8eea [app] [I] Started POST "/api/v2/compliance/arf_reports/da83e819-1418-4f60-a785-bbdc7a43b362/1/1532624714" for 192.168.121.94 at 2018-07-26 17:29:35 +0000
2018-07-26 17:29:35 d5ee8eea [app] [I] Processing by Api::V2::Compliance::ArfReportsController#create as HTML
2018-07-26 17:29:35 d5ee8eea [app] [I]   Parameters: {"logs"=>[], "digest"=>"160ddb2889af6c10ad0720e4ecbc6094d691b4dd51cf148faf24d56c72935038", "metrics"=>{"passed"=>0, "failed"=>0, "othered"=>0}, "apiv"=>"v2", "cname"=>"da83e819-1418-4f60-a785-bbdc7a43b362", "policy_id"=>"1", "date"=>"1532624714", "arf_report"=>{"logs"=>[], "digest"=>"160ddb2889af6c10ad0720e4ecbc6094d691b4dd51cf148faf24d56c72935038", "metrics"=>{"passed"=>0, "failed"=>0, "othered"=>0}}}

2018-07-26 17:29:35 d5ee8eea [app] [I] Current user: foreman_api_admin (administrator)
2018-07-26 17:29:35 d5ee8eea [app] [E] Failed to upload Arf Report, no OpenSCAP Capsule set for host client01.satellite6.example.com
2018-07-26 17:29:35 d5ee8eea [app] [I] Completed 422 Unprocessable Entity in 31ms (Views: 0.3ms | ActiveRecord: 4.4ms)

Comment 2 sthirugn@redhat.com 2018-07-26 17:43:57 UTC
The error is gone and the report started showing up when I associated an arbitrary capsule with the host by editing Host -> OpenSCAP Capsule.

So, for the LB capsules setup it appears that:
- The hostgroup (or the host) must be associated with one of the capsules, picked arbitrarily, even though the clients don't know which specific capsule they are attached to because they register using the load-balanced capsule FQDN.
- The foreman_scap_client class must be overridden as shown in the description of this bug.

If I do the above, the puppet class parameters of foreman_scap_client are modified automatically by Satellite as follows:

For server: a new `Specify matcher` item is created in Smart Class Parameter with the following rule:
fqdn = client01.satellite6.example.com
value = capsule01.satellite6.example.com (random capsule specified by me in the Hostgroup)

Due to the above automatically created rule, the LB capsule name which I overrode in the puppet class as mentioned in the bug description is not taken into account.

The only workaround I see is:
- Register the client with no OpenSCAP Capsule specified in the Hostgroup.
- Then edit the host to select an arbitrary capsule in the OpenSCAP Capsule dropdown (a hammer sketch follows below).
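
If scripting that second step, something along these lines may work. This is only a sketch: it assumes the hammer_cli_foreman_openscap plugin adds an --openscap-proxy-id option to `hammer host update` (verify with `hammer host update --help`); the host name and proxy id are examples from this bug:

   # list capsules to find the id of one real (non-LB) capsule
   # hammer proxy list
   # assign that capsule as the host's OpenSCAP Capsule (option name is an assumption)
   # hammer host update --name client01.satellite6.example.com --openscap-proxy-id 1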

Comment 4 Ondřej Pražák 2018-07-30 06:38:17 UTC
Created redmine issue http://projects.theforeman.org/issues/24472 from this bug

Comment 5 Ondřej Pražák 2018-07-30 06:39:28 UTC
We do not allow reports to be created for hosts without an OpenSCAP capsule because that was previously causing problems (see #1334035).

There is one possible issue with the workaround. The report xml files are stored on the capsule, so assigning an arbitrary capsule may leave users unable to view and download the 'full report' generated by openscap, because it is stored on a different capsule than the one that was assigned.

Comment 6 Ondřej Pražák 2018-07-30 08:23:29 UTC
Would it be feasible to use NFS for the load-balanced capsules?

The problem with the report files might be solved this way. The capsules store the files in the location specified in /etc/foreman-proxy/settings.d/openscap.yml; it defaults to /var/lib/foreman-proxy/openscap.

Comment 7 sthirugn@redhat.com 2018-08-01 17:09:10 UTC
(In reply to Ondřej Pražák from comment #6)
> Would it be feasible to use NFS for the load-balanced capsules?
> 
> The problem with the report files might be solved this way. The capsules
> store the files in the location specified in
> /etc/foreman-proxy/settings.d/openscap.yml; it defaults to
> /var/lib/foreman-proxy/openscap.

Can you explain further? Do you mean using NFS shared storage for /var/lib/foreman-proxy/openscap so that all capsules point to the same folder?

Comment 8 Ondřej Pražák 2018-08-02 07:39:00 UTC
Yes, exactly. That way it does not matter which capsule is asked for the xml file, since all the capsules access the same shared folder containing all the files.
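
For illustration, a minimal sketch of what this could look like on each load-balanced capsule. The NFS server and export path are made-up placeholders, and the ownership shown is only an assumption; adjust to whatever the foreman-proxy service on the capsule actually requires:

   # /etc/fstab entry on each capsule (placeholder NFS server and export)
   nfs.example.test:/exports/openscap  /var/lib/foreman-proxy/openscap  nfs  defaults  0 0

   # mount the share and make sure the foreman-proxy user can write to it
   # mount /var/lib/foreman-proxy/openscap
   # chown foreman-proxy:foreman-proxy /var/lib/foreman-proxy/openscap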

Comment 9 Satellite Program 2018-08-02 15:40:45 UTC
Upstream bug assigned to oprazak

Comment 11 Marek Hulan 2018-08-15 12:54:10 UTC
Separate fixes are needed in foreman_openscap and smart_proxy_openscap.

Comment 12 Satellite Program 2018-08-15 14:08:15 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/24504 has been resolved.

Comment 16 sthirugn@redhat.com 2018-09-28 16:31:25 UTC
This failed for me in 6.4 Snap 23. I get the same error as mentioned in Comment 1.

Comment 21 sthirugn@redhat.com 2018-10-03 22:25:51 UTC
Verified in Satellite 6.4 Snap 25. I followed the exact same steps mentioned in the bug description, and the openscap reports now upload without errors.

# rpm -qa | grep scap
perl-Pod-Escapes-1.04-292.el7.noarch
puppet-foreman_scap_client-0.3.16-3.el7sat.noarch
rubygem-smart_proxy_openscap-0.6.11-1.el7sat.noarch
scap-security-guide-0.1.36-9.el7_5.noarch
tfm-rubygem-foreman_openscap-0.10.3-1.el7sat.noarch
tfm-rubygem-hammer_cli_foreman_openscap-0.1.6-1.el7sat.noarch
openscap-1.2.16-8.el7_5.x86_64
openscap-scanner-1.2.16-8.el7_5.x86_64
rubygem-openscap-0.4.7-3.el7sat.noarch

# rpm -q satellite
satellite-6.4.0-15.el7sat.noarch

Comment 23 Bryan Kearney 2018-10-16 19:09:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2927