Bug 1673447 - Capsule sync planning in foreman-tasks sometimes takes too long [NEEDINFO]
Summary: Capsule sync planning in foreman-tasks sometimes takes too long
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Content Views
Version: 6.3.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: 6.6.0
Assignee: Ivan Necas
QA Contact: Lai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-07 14:55 UTC by sthirugn@redhat.com
Modified: 2019-10-22 12:47 UTC
CC List: 18 users

Fixed In Version: tfm-rubygem-katello-3.12.0-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1684698 1726810
Environment:
Last Closed: 2019-10-22 12:47:16 UTC
Target Upstream Version:
trichard: needinfo? (inecas)


Attachments


Links
System ID Priority Status Summary Last Updated
Foreman Issue Tracker 25714 Normal Closed smart proxy syncs should use a service class 2020-05-25 11:13:53 UTC
Foreman Issue Tracker 26079 Normal Closed Move RemoveUnneededRepos out of capsule sync and into delete_orphaned_content rake task 2020-05-25 11:13:53 UTC
Red Hat Knowledge Base (Solution) 3945741 None None The capsule sync task's planning stage takes a long time to complete on Red Hat Satellite 6.3. 2019-02-28 11:51:49 UTC
Red Hat Product Errata RHSA-2019:3172 None None None 2019-10-22 12:47:28 UTC

Description sthirugn@redhat.com 2019-02-07 14:55:46 UTC
Description of problem:
Capsule sync planning in foreman-tasks sometimes takes too long

Version-Release number of selected component (if applicable):
Satellite 6.3.5

How reproducible:
Always

Steps to Reproduce:
So far this has been reported only in large Satellite environments with tens of capsules and tens of thousands of content hosts.
1. Publish a content view.
2. Promote the content view to a lifecycle environment which has tens of capsules associated with it, serving a large number of content hosts.
3. Notice in the Foreman tasks that the content view sync planning takes a long time (sometimes more than 10 minutes), which causes very high capsule sync times.

Actual results:
The capsule sync task's planning phase takes too long, sometimes more than 10 minutes.

Expected results:
The capsule sync task's planning should finish quickly.

Additional info:

Comment 6 sthirugn@redhat.com 2019-02-11 20:19:20 UTC
Ignore the reproducer steps in the description of the bug. More testing revealed that the number of repositories in the content view directly impacts sync performance.

To reproduce this bug consistently:
1. Create a content view with a large number of repositories (I had 102 repos).
2. Have two or more capsules.
3. Publish/promote the content view and observe the time taken to sync to the capsule(s). You will notice that the capsule sync time increases with the number of content view repositories.
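The reproducer above can be sketched with hammer. This is a minimal, hypothetical sketch: the organization name, content view name, and capsule IDs are placeholders for your environment, and repository enumeration may need adjusting for your hammer version.

```shell
#!/bin/bash
# Hypothetical reproducer sketch; ORG, CV, and capsule IDs are placeholders.
ORG="Example Org"
CV="perf-test-cv"

# 1. Create a content view and attach a large number of repositories.
hammer content-view create --name "$CV" --organization "$ORG"
for REPO_ID in $(hammer --no-headers repository list --organization "$ORG" --fields Id); do
  hammer content-view add-repository --name "$CV" --organization "$ORG" \
    --repository-id "$REPO_ID"
done

# 2. Publish the content view (promotion to the capsules' lifecycle
#    environment triggers capsule syncs automatically).
hammer content-view publish --name "$CV" --organization "$ORG"

# 3. Time a manual sync on each capsule, then inspect task durations
#    to compare the planning phase against the run phase.
for CAPSULE_ID in 2 3; do
  time hammer capsule content synchronize --id "$CAPSULE_ID"
done
hammer task list --search 'label = Actions::Katello::CapsuleContent::Sync'
```

Repeating the run after adding more repositories to the content view should show the capsule sync (and in particular its planning stage) growing with the repository count.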

Comment 11 Mike McCune 2019-03-01 22:16:34 UTC
This bug was cloned and is still going to be included in the 6.4.3 release. It no longer has the sat-6.4.z+ flag and the 6.4.3 target milestone set; those are now on the 6.4.z cloned bug. Please see the Clones field to track the progress of this bug in the 6.4.3 release.

Comment 16 Justin Sherrill 2019-03-18 13:27:17 UTC
Linking up the first of two upstream issues. I'm not 100% sure this will cleanly backport to 6.5; we shall see. We may have to do a third fix based on code that is halfway between 6.4 and upstream master.

Comment 18 Bryan Kearney 2019-03-18 22:06:33 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/25714 has been resolved.

Comment 34 errata-xmlrpc 2019-10-22 12:47:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3172

