Bug 1651143 - Accumulated ARF reports are being re-sent to Satellite Server forever
Summary: Accumulated ARF reports are being re-sent to Satellite Server forever
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: SCAP Plugin
Version: 6.4.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: Released
Assignee: Ondřej Pražák
QA Contact: Jameer Pathan
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-11-19 09:54 UTC by Pavel Moravec
Modified: 2022-03-13 16:07 UTC (History)
8 users

Fixed In Version: tfm-rubygem-foreman_openscap-0.11.5, tfm-rubygem-foreman_openscap-0.11.2, rubygem-smart_proxy_openscap-0.7.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-14 19:57:28 UTC
Target Upstream Version:
Embargoed:




Links:
- Foreman Issue Tracker 25502 (Normal, Closed): Better error handling for reports that already exist in foreman when sending from spool (last updated 2021-01-22 11:56:16 UTC)
- Foreman Issue Tracker 25514 (Normal, Closed): Better error handling when sending reports from spool - the proxy part (last updated 2021-01-22 11:56:16 UTC)
- Foreman Issue Tracker 26372 (Normal, Closed): When sending reports from spool for deleted policy, 404 is returned instead of 422 (last updated 2021-01-22 11:56:16 UTC)
- Red Hat Knowledge Base (Solution) 3705531 (last updated 2018-12-11 14:47:30 UTC)

Description Pavel Moravec 2018-11-19 09:54:49 UTC
Description of problem:
If the proxy fails to send a SCAP report, it stores it in the /var/spool/foreman-proxy/openscap/arf directory and retries it on the next run (invoked by a */30 cron job by default).

But several scenarios leave reports orphaned in the spool forever, e.g. if we:
- delete a host (any subsequent send returns 404)
- delete a policy (any subsequent send returns 404)
- restart the proxy just after it successfully sent a report but before it deleted it from the spool - every subsequent send fails with HTTP 500 "ActiveRecord::RecordInvalid: Validation failed: Reported at has already been taken" raised at app/models/foreman_openscap/arf_report.rb:109 in `create_arf`
- there may be other scenarios unknown at the moment

So there are scenarios that leave stale/orphaned ARF reports in the spool forever. Every execution of /etc/cron.d/rubygem-smart_proxy_openscap fails to send them, every 30 minutes (by default).

We should have a cleanup policy (ideally configurable in the foreman-proxy settings): if a report is older than X days, it is deleted instead of being retried - ideally with a warning logged to the proxy log.
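A minimal sketch of such a cleanup pass, assuming a hypothetical `max_age_days` setting (no such foreman-proxy option existed at the time of this report; only the spool path comes from above):

```ruby
require 'logger'

# Delete spooled ARF reports older than max_age_days instead of retrying
# them forever. The function name and the max_age_days knob are
# illustrative, not part of smart_proxy_openscap.
def cleanup_spool(spool_dir, max_age_days, logger = Logger.new($stderr))
  cutoff = Time.now - max_age_days * 24 * 60 * 60
  removed = []
  Dir.glob(File.join(spool_dir, '**', '*')).each do |path|
    next unless File.file?(path)
    next unless File.mtime(path) < cutoff
    logger.warn("Dropping ARF report older than #{max_age_days} days: #{path}")
    File.delete(path)
    removed << path
  end
  removed
end

# Example, using the default spool location from this report:
# cleanup_spool('/var/spool/foreman-proxy/openscap/arf', 30)
```

Run from the same cron job, this would bound how long a permanently failing report can linger in the spool.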


Version-Release number of selected component (if applicable):
Sat 6.4
rubygem-smart_proxy_openscap-0.6.11-1.el7sat.noarch


How reproducible:
100%


Steps to Reproduce:
1. attach a few OpenSCAP policies to a few hosts
2. remove a host (but keep the VM running for a while)
3. remove a policy
4. optionally, restart foreman-proxy after a report was sent but before it was deleted from the spool (or mimic this by copying a report out of the spool, letting it be processed successfully, and then copying it back into the spool)
5. wait several half-hour cycles and watch which ARF reports are (re)sent to Satellite:

tail -f /var/log/httpd/foreman-ssl_access_ssl.log | grep "POST /api/v2/compliance/arf_reports/"


Actual results:
5. shows the same reports being sent again and again, with the timestamp at the end of the URI (Unix seconds since the Epoch) stuck in the past (even years in the past, if you are patient enough :) ).

The requests fail with a 404 or 500 error response code every time, forever.


Expected results:
5. shows that reports that are too old are no longer attempted after some number of failing attempts.


Additional info:

Comment 1 Ondřej Pražák 2018-11-19 14:52:40 UTC
Connecting redmine issue http://projects.theforeman.org/issues/25502 from this bug

Comment 6 Ondřej Pražák 2019-01-31 07:58:07 UTC
Connecting redmine issue http://projects.theforeman.org/issues/25514 from this bug

Comment 9 Ondřej Pražák 2019-03-25 07:35:34 UTC
Connecting redmine issue http://projects.theforeman.org/issues/26372 from this bug

Comment 11 Jameer Pathan 2019-04-12 09:27:39 UTC
Verified.

@satellite 6.5.0 snap 23
@tfm-rubygem-foreman_openscap-0.11.5-1.el7sat.noarch
@rubygem-smart_proxy_openscap-0.7.1-1.el7sat.noarch

steps:
1. attached a few OpenSCAP policies to a few hosts
2. had some reports in the spool
3. removed some hosts
4. removed some policies

observation:

- If we delete a host or policy from Satellite, the corresponding reports in the spool are also deleted.
- If a report from the spool is successfully uploaded to Satellite, it is deleted from the spool.
- If a policy is not found on Satellite while uploading reports from the spool, the error "Policy with id 4 not found." is logged in production.log.
- If a host is not found on Satellite while uploading reports from the spool, the error "Could not find host identified by: 2cdf25d0-eafd-44e8-9e86-74918001f5e9" is logged in production.log.
- After deleting a SCAP policy, or the host itself, from Satellite, the host still sends reports to Satellite/Capsule; filed a separate bugzilla for this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1699260
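The verified behaviour amounts to treating some server responses as permanent failures (delete the spooled report) and others as transient (keep it for the next run). A rough sketch of that decision, with all names illustrative rather than the actual smart_proxy_openscap code:

```ruby
# 4xx responses (deleted host/policy, duplicate report) will never succeed
# on retry, so the spool file should be removed; 5xx and network errors are
# transient, so the file is kept for the next cron run. Hypothetical sketch,
# not the real plugin API.
def permanent_failure?(http_status)
  (400..499).cover?(http_status)
end

def flush_report(path, http_status)
  if http_status == 200
    File.delete(path)   # uploaded: remove from spool
    :uploaded
  elsif permanent_failure?(http_status)
    File.delete(path)   # e.g. 404 for a deleted host/policy, 422 for a duplicate
    :dropped
  else
    :kept               # transient (e.g. 5xx): retry on the next run
  end
end
```

This is why distinguishing 422 from 404 (Foreman issue 26372 above) matters: both are in the permanent class, but they signal different root causes in the logs.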

Comment 12 Bryan Kearney 2019-05-14 19:57:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1222

