Bug 1459807

Summary: Applying Errata to a Content Host via an incremental C.V. has to wait until the Capsule sync of the C.V. finishes
Product: Red Hat Satellite
Reporter: Pavel Moravec <pmoravec>
Component: Errata Management
Assignee: Jonathon Turel <jturel>
Status: CLOSED ERRATA
QA Contact: Stephen Wadeley <swadeley>
Severity: high
Priority: medium
Version: 6.2.10
CC: ahumbe, anazmy, bbuckingham, bkearney, jcallaha, rbobek, spetrosi
Target Milestone: 6.9.0
Keywords: Reopened, Triaged
Target Release: Unused
Hardware: x86_64
OS: Linux
Last Closed: 2021-04-21 13:11:19 UTC
Type: Bug

Description Pavel Moravec 2017-06-08 08:57:27 UTC
Description of problem:
Consider a Content Host registered to a Capsule and assigned to a Content View. If one wants to apply an erratum outside the C.V.'s current content, an incremental C.V. version is created (and automatically published to the Capsule via a Capsule (repo) sync). So far so good.

But when I check the "Apply Errata to Content Hosts Immediately after publishing" checkbox, the BulkAction that installs the errata is scheduled independently of the C.V. (repo) sync to the Capsule. The attempt to install the errata on the Content Host is therefore made before the Capsule has synced the content containing it, and the errata apply fails.

In general, this limits fast patching of systems registered via a Capsule.

The BulkAction / errata apply task simply must wait for the completion of all C.V. repo syncs to all Capsules. This "wait for all these task completions" can be weakened, e.g. to wait only for the syncs of those Capsules that have a Content Host the errata is being applied to.

Or maybe even better: move the BulkAction from its current place to the end of Actions::Katello::ContentView::CapsuleGenerateAndSync - for the relevant hosts of the given Capsule only, of course. A sketch of the intended ordering follows.
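
For illustration, a minimal Dynflow-style sketch of that ordering, using the same sequence/concurrence planning helpers the existing actions use. capsules_for is a hypothetical helper resolving the Capsules the given hosts are registered through, and the Sync argument list is elided - this is a sketch of the idea, not the final implementation:

    # Sketch only: make the errata install wait for the relevant Capsule syncs.
    sequence do
      # Sync the new incremental C.V. version to every Capsule that has one of
      # the target hosts registered through it.
      concurrence do
        capsules_for(hosts).each do |capsule|
          plan_action(::Actions::Katello::CapsuleContent::Sync, capsule) # options elided
        end
      end
      # Only after all those syncs complete, schedule the install itself.
      plan_action(::Actions::BulkAction,
                  ::Actions::Katello::Host::Erratum::ApplicableErrataInstall,
                  hosts, content[:errata_ids])
    end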


Version-Release number of selected component (if applicable):
Sat 6.2.9


How reproducible:
100%


Steps to Reproduce:
1. Create & publish (& promote) a C.V. with some bigger repo (e.g. the RHEL 7 base one) and an exclude filter that purges away some errata (I used "skip any errata released after 01-01-2017").
2. Have a Content Host registered via an external Capsule and associated with the Content View.
3. Find an erratum from the repo that isn't in the Content View (in my case, it was RHSA-2017:1365). Try to apply it to this Content Host with the "Apply Errata to Content Hosts Immediately after publishing" checkbox checked (cf. the docs [1]).
4. Wait until the task finishes.
5. Check whether the erratum was applied to the Content Host and check the host's /var/log/messages.


[1] https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/host_configuration_guide/sect-red_hat_satellite-host_configuration_guide-viewing_and_applying_errata-applying_errata_to_content_hosts

Actual results:
5. Errata apply fails; /var/log/messages contains something like:

Jun  8 10:43:34 pmoravec-rhel72 goferd: [INFO][worker-0] gofer.rmi.dispatcher:603 - call: Content.install() sn=154d65fa-835c-4e10-9ff5-aee076c29ec1 data={u'task_id': u'67a79b8e-910e-4e8d-b587-1cb980e617ff', u'consumer_id': u'248666cd-a06e-4bf4-8314-8f0e988af579'}
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 - handler failed
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 - Traceback (most recent call last):
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 -   File "/usr/lib/python2.7/site-packages/pulp/agent/lib/dispatcher.py", line 61, in install
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 -     _report = handler.install(conduit, units, dict(options))
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 -   File "/usr/lib/python2.7/site-packages/pulp_rpm/handlers/rpm.py", line 100, in install
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 -     details = pkg.install(names)
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 -   File "/usr/lib/python2.7/site-packages/pulp_rpm/handlers/rpmtools.py", line 158, in install
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 -     raise caught
Jun  8 10:43:37 pmoravec-rhel72 goferd: [ERROR][worker-0] pulp.agent.lib.dispatcher:65 - InstallError: 0:nss-3.28.4-1.2.el7_3.x86_64: No package(s) available to install

(if one compares the timestamps of when Actions::Katello::ContentView::CapsuleGenerateAndSync finished and when Actions::Katello::Host::Erratum::ApplicableErrataInstall started, the Capsule sync finishes after the errata install starts - and that's wrong)
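
A quick way to verify that ordering is from a foreman-rake console on the Satellite - a sketch only, assuming the ForemanTasks::Task model with its label / started_at / ended_at columns:

    # Compare the most recent run of each task; the bug is reproduced when the
    # install started before the Capsule sync ended.
    sync    = ForemanTasks::Task.where(:label => 'Actions::Katello::ContentView::CapsuleGenerateAndSync').order(:started_at).last
    install = ForemanTasks::Task.where(:label => 'Actions::Katello::Host::Erratum::ApplicableErrataInstall').order(:started_at).last
    puts "Capsule sync ended:     #{sync.ended_at}"
    puts "Errata install started: #{install.started_at}"
    puts "race reproduced" if install.started_at < sync.ended_at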


Expected results:
5. The erratum is applied, there is no error in /var/log/messages on the client, and the errata task is triggered only after the Capsule sync finishes.


Additional info:
It would be worth testing this BZ against "mixed" hosts, i.e. applying an errata to 6 hosts where:
- one is registered to Sat and in Default C.V.
- one is registered to Sat and in non-default C.V.
- one is registered to a Caps in Default C.V.
- one is registered to a Caps in non-default C.V.
- one is registered to another Caps in Default C.V.
- one is registered to the other Caps in a non-default C.V. (even a different one from above)

just to verify that the foreman tasks are scheduled in the correct order in every use case (see the console sketch below).
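
A console sketch for grouping such hosts per Capsule, using the subscription facet's registered_through attribute (the same lookup the patch later in this bug relies on); the host names are hypothetical:

    # Group the 6 test hosts by the proxy they are registered through, to see
    # which Capsule sync must precede which errata install.
    hosts = ::Host::Managed.where(:name => %w[sat-def sat-cv caps1-def caps1-cv caps2-def caps2-cv])
    by_proxy = hosts.group_by do |host|
      host.subscription_facet_attributes && host.subscription_facet_attributes.registered_through
    end
    by_proxy.each do |proxy, group|
      puts "#{proxy || '(Satellite itself)'}: #{group.map(&:name).join(', ')}"
    end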

Comment 4 Bryan Kearney 2019-01-04 13:18:20 UTC
The Satellite Team is attempting to provide an accurate backlog of bugzilla requests which we feel will be resolved in the next few releases. We do not believe this bugzilla will meet that criteria, and have plans to close it out in 1 month. This is not a reflection on the validity of the request, but a reflection of the many priorities for the product. If you have any concerns about this, feel free to contact Red Hat Technical Support or your account team. If we do not hear from you, we will close this bug out. Thank you.

Comment 5 Pavel Moravec 2019-01-24 14:26:37 UTC
It should be sufficient to insert the following into (on 6.4):

/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.7.0.42/app/lib/actions/katello/content_view/incremental_updates.rb :

            if hosts.any? && !content[:errata_ids].blank?
              errata = ::Katello::Erratum.with_identifiers(content[:errata_ids])
              hosts = hosts.where(:id => ::Katello::Host::ContentFacet.with_applicable_errata(errata).pluck(:host_id))
              plan_action(::Actions::Katello::CapsuleContent::Sync,
                          capsule_content.capsule,
                          :environment_id => ...,
                          :repository_id => ...,
                          :skip_metadata_check => ...)

              plan_action(::Actions::BulkAction, ::Actions::Katello::Host::Erratum::ApplicableErrataInstall, hosts, content[:errata_ids])
            end


The change adds the ::Actions::Katello::CapsuleContent::Sync call there - it remains to check whether that is sufficient and how to fetch the input arguments for the task.

Caps sync will then be invoked twice, but the second one will be almost a no-op, so it is not a big issue.

Comment 6 Pavel Moravec 2019-01-24 14:34:58 UTC
(or, more trivially from a coding point of view, call plan_action(Katello::Repository::CapsuleSync, view, repository), though it will fire _all_ Capsule syncs :( )

Comment 7 Pavel Moravec 2019-01-24 22:05:42 UTC
Patch for 6.4.1 :

--- /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.7.0.42/app/lib/actions/katello/content_view/incremental_updates.rb.orig	2019-01-24 20:37:19.910159837 +0100
+++ /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.7.0.42/app/lib/actions/katello/content_view/incremental_updates.rb	2019-01-24 23:03:27.948047273 +0100
@@ -8,19 +8,43 @@ module Actions
           old_new_version_map = {}
           output_for_version_ids = []
 
+          Rails.logger.info("PavelM: hosts=#{hosts.inspect}, content=#{content.inspect}")
+          if hosts.any? && !content[:errata_ids].blank?
+            errata = ::Katello::Erratum.with_identifiers(content[:errata_ids])
+            hosts = hosts.where(:id => ::Katello::Host::ContentFacet.with_applicable_errata(errata).pluck(:host_id))
+            smart_proxies = (hosts.map { |host| host.subscription_facet_attributes.registered_through unless host.subscription_facet_attributes.nil? }).uniq
+            smart_proxies = (smart_proxies.map { |proxy| SmartProxy.find_by_name(proxy) unless proxy.nil? or SmartProxy.find_by_name(proxy).default_capsule? }).compact
+          end
+
           sequence do
             concurrence do
               version_environments.each do |version_environment|
                 version = version_environment[:content_view_version]
+                environment = version_environment[:environments]
                 if version.content_view.composite?
                   fail _("Cannot perform an incremental update on a Composite Content View Version (%{name} version version %{version}") %
                     {:name => version.content_view.name, :version => version.version}
                 end
 
-                action = plan_action(ContentViewVersion::IncrementalUpdate, version,
-                            version_environment[:environments], :resolve_dependencies => dep_solve, :content => content, :description => description)
-                old_new_version_map[version] = action.new_content_view_version
-                output_for_version_ids << {:version_id => action.new_content_view_version.id, :output => action.output}
+                sequence do
+                  action = plan_action(ContentViewVersion::IncrementalUpdate, version,
+                              environment, :resolve_dependencies => dep_solve, :content => content, :description => description)
+                  Rails.logger.info("PavelM: version=#{version.inspect}, env=#{environment.inspect}")
+                  concurrence do
+                    if defined? smart_proxies
+                      environment.each do |env|
+                        Rails.logger.info("PavelM: env=#{env.inspect}, version.content_view=#{version.content_view.inspect}, smart_proxies=#{smart_proxies.inspect}")
+                        if ::Katello::CapsuleContent.sync_needed?(env)
+                          plan_action(ContentView::CapsuleSync,
+                                      version.content_view,
+                                      env)
+                        end
+                      end
+                    end
+                  end
+                  old_new_version_map[version] = action.new_content_view_version
+                  output_for_version_ids << {:version_id => action.new_content_view_version.id, :output => action.output}
+                end
               end
             end
 
@@ -29,8 +53,6 @@ module Actions
             end
 
             if hosts.any? && !content[:errata_ids].blank?
-              errata = ::Katello::Erratum.with_identifiers(content[:errata_ids])
-              hosts = hosts.where(:id => ::Katello::Host::ContentFacet.with_applicable_errata(errata).pluck(:host_id))
               plan_action(::Actions::BulkAction, ::Actions::Katello::Host::Erratum::ApplicableErrataInstall, hosts, content[:errata_ids])
             end
             plan_self(:version_outputs => output_for_version_ids)



The only drawback noticed: the Actions::Katello::ContentView::IncrementalUpdates task ends up with a warning because the Actions::BulkAction for the Caps sync is skipped, despite the task completing successfully.

Comment 8 Bryan Kearney 2019-02-07 11:57:25 UTC
Thank you for your interest in Satellite 6. We have evaluated this request, and while we recognize that it is a valid request, we do not expect this to be implemented in the product in the foreseeable future. This is due to other priorities for the product, and not a reflection on the request itself. We are therefore closing this out as WONTFIX. If you have any concerns about this, please do not reopen. Instead, feel free to contact Red Hat Technical Support. Thank you.

Comment 9 Pavel Moravec 2019-02-07 12:35:04 UTC
(In reply to Bryan Kearney from comment #8)
> Thank you for your interest in Satellite 6. We have evaluated this request,
> and while we recognize that it is a valid request, we do not expect this to
> be implemented in the product in the foreseeable future. This is due to
> other priorities for the product, and not a reflection on the request
> itself. We are therefore closing this out as WONTFIX. If you have any
> concerns about this, please do not reopen. Instead, feel free to contact Red
> Hat Technical Support. Thank you.

Bryan,
could you reconsider fixing this in 6.6 or later? There is a pending PR:

https://github.com/Katello/katello/pull/7946

for it.

Comment 10 Bryan Kearney 2019-02-07 21:08:23 UTC
Identified a flaw in the process: I check for upstream patches before marking bugs for deletion, but not after. I will work to address this in future rounds. Re-opening this.

-- bk

Comment 11 Bryan Kearney 2019-11-05 20:42:43 UTC
Upstream bug assigned to pmoravec

Comment 14 Pavel Moravec 2020-09-25 07:45:49 UTC
Re-assigning to jturel, who has a pending upstream PR.

Comment 15 Bryan Kearney 2020-09-29 20:05:48 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/25921 has been resolved.

Comment 16 Brad Buckingham 2020-11-13 18:59:12 UTC
Fix is in Satellite 6.9 SNAP 1 with tfm-rubygem-katello-3.18.0-0.1.rc1.el7sat.noarch

Comment 22 errata-xmlrpc 2021-04-21 13:11:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.9 Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1313