+++ This bug was initially created as a clone of Bug #1890683 +++

Solution is such that we can store some data per sync (SYNC_HISTORY):

  smart_proxy_id
  repository_id
  started
  finished

*As part of any capsule sync for a given repo*: at the start of the sync for a given repo and smart proxy, any existing entry in the data would be deleted and a new entry would be created with the current start time. At the end of the sync, the finished time would be filled in, if the entry still exists (i.e. hasn't been deleted by another task). This functions as a 'two-stage' ACK of having synced successfully.

*As part of CV promotion*: if changes are detected for a given repo during CV promotion, delete any entries for those repos in the SYNC_HISTORY table.

*As part of repository update*: if a repository is updated and 'unprotected' or 'download_policy' changed (for inherited proxies), delete all history items for any instance of that repository.

*As part of content upload into a repo*: any history events for that repo are deleted.

*If upstream_name for docker repos is changed*: delete all history items for that particular repository.

*At repo sync time in Library*: if there are changes (or a package upload/remove), delete all history items for that particular repository.

*At capsule sync time*: if a history event exists in the table for a given repo and smart proxy that 'finished', do not schedule the sync.

*A full capsule sync*: delete all history events for that capsule.

--- Additional comment from on 2020-10-22T17:40:10Z

Created from redmine issue https://projects.theforeman.org/issues/30824

--- Additional comment from on 2020-10-22T17:40:11Z

Upstream bug assigned to None

--- Additional comment from on 2020-11-06T20:02:12Z

Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/30824 has been resolved.
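The bookkeeping described above can be sketched in plain Ruby. This is an illustrative in-memory model only, not Katello's actual implementation: the `SyncHistory` and `Entry` names and all method names are invented here to show the two-stage ACK and the invalidation rules.

```ruby
# Illustrative sketch of the proposed SYNC_HISTORY bookkeeping.
# Entry holds the four stored fields described in the design notes.
Entry = Struct.new(:smart_proxy_id, :repository_id, :started, :finished)

class SyncHistory
  def initialize
    @entries = []
  end

  # Start of a capsule sync for a repo: drop any stale entry, then record
  # a fresh one with the current start time (first stage of the ACK).
  def start_sync(proxy_id, repo_id, now = Time.now)
    @entries.reject! { |e| e.smart_proxy_id == proxy_id && e.repository_id == repo_id }
    @entries << Entry.new(proxy_id, repo_id, now, nil)
  end

  # End of the sync: fill in the finished time, but only if the entry
  # still exists (second stage of the ACK; a content change may have
  # deleted it in the meantime).
  def finish_sync(proxy_id, repo_id, now = Time.now)
    entry = @entries.find { |e| e.smart_proxy_id == proxy_id && e.repository_id == repo_id }
    entry.finished = now if entry
  end

  # Content changed for a repo (CV promotion, repo update, content upload,
  # upstream_name change, Library sync): invalidate history for that repo
  # on every smart proxy.
  def invalidate_repo(repo_id)
    @entries.reject! { |e| e.repository_id == repo_id }
  end

  # A full capsule sync wipes all history events for that capsule.
  def invalidate_proxy(proxy_id)
    @entries.reject! { |e| e.smart_proxy_id == proxy_id }
  end

  # At capsule sync time: skip scheduling if a 'finished' entry exists.
  def skip_sync?(proxy_id, repo_id)
    @entries.any? { |e| e.smart_proxy_id == proxy_id && e.repository_id == repo_id && e.finished }
  end
end
```

The delete-then-recreate at sync start is what makes the scheme safe against concurrent invalidation: if any of the content-change events fires mid-sync, the entry disappears, `finish_sync` becomes a no-op, and the repo will be synced again next time.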
Vlad, we discovered an issue upstream with this around puppet repos: https://projects.theforeman.org/issues/31469. Could you fail_QA this bug so we can work to get the fix backported? Thanks! Justin Sherrill
Hello Justin, turning this back to ASSIGNED. Can you specify the steps to reproduce in more detail? I tried both a CV with a puppet-only repo and one with puppet+yum repos, but the sync always succeeded. Vlad.
The way I reproduced it:
1. Create a puppet repo
2. Upload a puppet module to it
3. Create a content view
4. Add the puppet module to the content view (not the repo)
5. Publish the content view
6. Add Library to a Capsule
7. Sync the capsule and monitor task progress

Result: the task is 'paused' with failures.
Verified the issue from comment #6 on snap 4 - the puppet CV is synced without errors to an empty capsule, and no history record is created for it. Other functionality and performance remain the same as tested before (see comment #4). A new BZ was filed for the case where a docker-type repo is synced after the puppet CV: https://bugzilla.redhat.com/show_bug.cgi?id=1906737
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Satellite 6.8.2 Async Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5467
*** Bug 1844713 has been marked as a duplicate of this bug. ***