Bug 1516651

Summary: During large scale remote execution it is hard to cancel the job
Product: Red Hat Satellite
Component: Remote Execution
Reporter: sbadhwar
Assignee: Adam Ruzicka <aruzicka>
QA Contact: Peter Ondrejka <pondrejk>
Status: CLOSED ERRATA
Severity: medium
Version: 6.3.0
CC: aruzicka, bbuckingham, cduryee, inecas, jhutar, lzap, mmccune, psuriset, sbadhwar, zhunting
Keywords: Triaged
Whiteboard: scale_lab
Fixed In Version: tfm-rubygems-dynflow-0.8.34
Last Closed: 2018-02-21 16:54:37 UTC
Type: Bug
Bug Blocks: 1370139

Description sbadhwar 2017-11-23 08:09:04 UTC
Description of problem:
When running remote execution at large scale (we ran a job against 30,000 hosts), it becomes very hard to cancel or abort the job before all hosts have been queued for execution.
The situation gets worse when queuing the tasks takes a long time.

Version-Release number of selected component (if applicable):


How reproducible: Every time


Steps to Reproduce:
1. Run a remote execution job on a large number of hosts (we used 30,000)
2. Try to cancel the job while it is running and not all hosts have been queued for execution
3. The job does not cancel until all hosts are queued

Actual results:

The job does not cancel until all hosts have been queued for execution.


Expected results:

The job should be cancelled promptly, even if some hosts have not yet been queued for execution.


Additional info:

Comment 1 Adam Ruzicka 2017-11-23 08:14:18 UTC
Connecting upstream issue to this BZ.

Comment 2 Satellite Program 2017-11-23 09:25:19 UTC
Upstream bug assigned to aruzicka

Comment 3 Ivan Necas 2017-11-23 09:50:36 UTC
There is a related BZ, https://bugzilla.redhat.com/show_bug.cgi?id=1370139, that resolving this issue will also help move forward. I am not closing it as a duplicate, since this bug talks directly about job cancellation and that specific case is worth verifying separately.

Comment 5 Satellite Program 2017-12-14 17:26:08 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/21716 has been resolved.

Comment 6 Ivan Necas 2017-12-14 17:35:14 UTC
Upstream release here https://github.com/theforeman/foreman-packaging/pull/1983

Comment 7 Peter Ondrejka 2017-12-21 10:33:11 UTC
Verified in Sat 6.3 snap 29: when the job is canceled while in progress, it finishes scheduling the current batch of hosts (100) and the remaining tasks are not started.
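The verified behavior above can be sketched roughly as follows. This is an illustrative sketch only: the class and method names (`BatchScheduler`, `cancel!`, `schedule_next_batch`) are hypothetical and are not the actual Dynflow/Foreman API; the point is the control flow, where a cancel request takes effect between batches rather than only after all hosts are queued.

```ruby
# Hypothetical sketch of batched scheduling with a cancellation check.
# Names here are illustrative, not the real Dynflow/Foreman API.
class BatchScheduler
  BATCH_SIZE = 100

  attr_reader :scheduled

  def initialize(hosts)
    @queue = hosts.dup
    @scheduled = []
    @cancelled = false
  end

  # A cancel request takes effect between batches; a batch already
  # being scheduled is allowed to finish, matching the verified behavior.
  def cancel!
    @cancelled = true
  end

  # Schedule the next batch of up to BATCH_SIZE hosts.
  # Returns false once cancelled or when no hosts remain.
  def schedule_next_batch
    return false if @cancelled || @queue.empty?
    @queue.shift(BATCH_SIZE).each { |host| @scheduled << host }
    true
  end

  # Drive scheduling until done or cancelled.
  def run
    nil while schedule_next_batch
    @scheduled
  end
end
```

With this structure, cancelling after the first batch leaves only those 100 hosts scheduled instead of all 30,000 having to be queued first.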

Comment 8 Satellite Program 2018-02-21 16:54:37 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336