Bug 1516651 - During large scale remote execution it is hard to cancel the job
Summary: During large scale remote execution it is hard to cancel the job
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Remote Execution
Version: 6.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Assignee: Adam Ruzicka
QA Contact: Peter Ondrejka
URL:
Whiteboard: scale_lab
Depends On:
Blocks: 1370139
 
Reported: 2017-11-23 08:09 UTC by sbadhwar
Modified: 2019-04-01 20:27 UTC
CC: 10 users

Fixed In Version: tfm-rubygems-dynflow-0.8.34
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-21 16:54:37 UTC
Target Upstream Version:
Embargoed:




Links
Foreman Issue Tracker 21716 (Priority: Normal, Status: Closed): "Using polling with bulk actions breaks cancelling", last updated 2020-06-03 13:46:19 UTC

Description sbadhwar 2017-11-23 08:09:04 UTC
Description of problem:
While running large-scale remote execution (we ran a job against 30k hosts), it is very hard to cancel or abort the job until all of the hosts have been queued for execution. The situation gets worse when queuing the tasks takes a long time.

Version-Release number of selected component (if applicable):


How reproducible: Every time


Steps to Reproduce:
1. Run remote execution on a large number of hosts (we used 30k)
2. Try to cancel the job while it is running and not all hosts have been queued for execution
3. The job does not cancel until all hosts are queued.

Actual results:

The job does not cancel until all hosts are queued for execution.


Expected results:

The job should be cancelled even if some of the hosts have not yet been queued.


Additional info:
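
For illustration only, a minimal Ruby sketch (hypothetical names, not Foreman/Dynflow's actual API) of the difference between the reported and the expected behavior: in the buggy shape the scheduling loop never consults a cancel flag, so cancellation cannot take effect until every host has been queued, while a cancel-aware loop re-checks the flag between batches.

# Hypothetical sketch, not the actual Foreman/Dynflow code.
class BulkJob
  BATCH_SIZE = 100

  def initialize(hosts)
    @hosts = hosts
    @cancelled = false
  end

  # Called when the user cancels the job from the UI or API.
  def cancel!
    @cancelled = true
  end

  # Buggy shape: every host is queued before a cancel can take effect.
  def run_uncancellable
    @hosts.each { |host| queue(host) }
  end

  # Expected shape: re-check the cancel flag between batches so a
  # cancel issued mid-run stops the remaining hosts from being queued.
  def run
    @hosts.each_slice(BATCH_SIZE) do |batch|
      break if @cancelled
      batch.each { |host| queue(host) }
    end
  end

  private

  def queue(host)
    puts "queued #{host}" # stands in for creating the per-host task
  end
end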

Comment 1 Adam Ruzicka 2017-11-23 08:14:18 UTC
Connecting upstream issue to this BZ.

Comment 2 Satellite Program 2017-11-23 09:25:19 UTC
Upstream bug assigned to aruzicka

Comment 3 Ivan Necas 2017-11-23 09:50:36 UTC
There is a related BZ, https://bugzilla.redhat.com/show_bug.cgi?id=1370139, that resolving this issue will also help move forward. I am not closing this as a duplicate, since this one is specifically about job cancelling and that case is worth verifying separately.

Comment 5 Satellite Program 2017-12-14 17:26:08 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/21716 has been resolved.

Comment 6 Ivan Necas 2017-12-14 17:35:14 UTC
Upstream release: https://github.com/theforeman/foreman-packaging/pull/1983

Comment 7 Peter Ondrejka 2017-12-21 10:33:11 UTC
Verified in Sat 6.3 snap 29: when the job is canceled in progress, it finishes scheduling the current batch of hosts (100) and the rest of the tasks are not started.
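
That behavior matches a cancel check placed at batch boundaries rather than per host. A self-contained Ruby sketch of the verified behavior (hypothetical, using the batch size of 100 reported above):

cancelled = false
queued = []

hosts = (1..1_000).map { |i| "host#{i}" }
hosts.each_slice(100).with_index do |batch, i|
  break if cancelled                    # the flag is consulted only between batches
  batch.each { |host| queued << host }  # the batch in flight always completes
  cancelled = true if i == 2            # simulate a cancel arriving mid-run
end

puts queued.size # => 300: the current batch finished scheduling, the rest never started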

Comment 8 Satellite Program 2018-02-21 16:54:37 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336

