Bug 507957 - dagman: schedd both accepts and starves more DAG jobs than MAX_JOBS_RUNNING
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: grid
Hardware: x86_64 Linux
Priority: medium
Severity: high
Version: 1.3
Target Milestone: ---
Assigned To: Pete MacKinnon
QA Contact: MRG Quality Engineering
Depends On: 526480
Reported: 2009-06-24 16:21 EDT by Pete MacKinnon
Modified: 2010-07-22 13:17 EDT (History)
2 users (show)

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2010-07-22 13:17:16 EDT
Type: ---
Regression: ---

Attachments: None
Description Pete MacKinnon 2009-06-24 16:21:28 EDT
Description of problem:

schedd configured with MAX_JOBS_RUNNING=200
500 concurrent DAG jobs submitted

However, each condor_dagman job counts against the total running-job limit. The schedd happily accepts all 500 DAG submissions and only realizes it is over the limit well after those dagmans have submitted their own node jobs. At that point the schedd gets stuck.
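The arithmetic behind the wedge can be sketched as follows (numbers taken from this report; the variable names are just for illustration):

```shell
# Each condor_submit_dag spawns one scheduler-universe condor_dagman job,
# and per this report those dagman jobs count against MAX_JOBS_RUNNING.
MAX_JOBS_RUNNING=200
NUM_DAGS=500

# All 500 dagman jobs are accepted, so the running-job budget is already
# overdrawn before a single DAG node job can start.
BUDGET_LEFT=$(( MAX_JOBS_RUNNING - NUM_DAGS ))
echo "budget left for node jobs: $BUDGET_LEFT"   # negative: schedd is wedged
```

This matches the condor_q output below: all 500 "running" jobs are the dagmans themselves, and the 178 idle node jobs can never be scheduled.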

Version-Release number of selected component (if applicable):
$CondorVersion: 7.3.2 Jun  8 2009 BuildID: RH-7.3.2-0.2.el5 PRE-RELEASE-UWCS $
$CondorPlatform: X86_64-LINUX_RHEL5 $

How reproducible:

Steps to Reproduce:
1. log into ha-schedd (mrg27) as bigmonkey
2. cd dagman
3. ./
4. wait for 500 submits to complete
5. run condor_q -dag until the summary stops updating the running and idle counts
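The submit script in step 3 is truncated in the report, so the following is only a hypothetical reconstruction of what it might do: generate N trivial one-node DAGs under /tmp/dag_test (file names and submit contents are assumptions), with the actual condor_submit_dag call shown commented out:

```shell
# Hypothetical stand-in for the truncated submit script in step 3.
# Generates small single-node DAGs; 3 here instead of the 500 used in the test.
mkdir -p /tmp/dag_test/out
for i in $(seq 1 3); do
    cat > /tmp/dag_test/node$i.sub <<EOF
universe   = vanilla
executable = /bin/true
output     = /tmp/dag_test/out/node$i.out
queue
EOF
    printf 'JOB A node%d.sub\n' "$i" > /tmp/dag_test/test$i.dag
    # condor_submit_dag /tmp/dag_test/test$i.dag
done
ls /tmp/dag_test/*.dag | wc -l
```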
Actual results:

Job status doesn't change:
[16:08:03][bigmonkey@mrg27:~/dagman]$ condor_q -dag | tail -1
678 jobs; 178 idle, 500 running, 0 held
[16:09:19][bigmonkey@mrg27:~/dagman]$ condor_q -dag | tail -1
678 jobs; 178 idle, 500 running, 0 held

Job output doesn't change:
[15:57:04][bigmonkey@mrg27:~/dagman]$ ls -1 /tmp/dag_test/out/* | wc -l
[16:05:41][bigmonkey@mrg27:~/dagman]$ ls -1 /tmp/dag_test/out/* | wc -l

Expected results:

Not sure...perhaps the schedd should put new top-level dagmans on Hold until the running count drops below the limit? Or introduce a configurable buffer that starts holding new jobs as the total approaches some percentage of the limit?

Additional info:
Comment 1 Pete MacKinnon 2009-07-02 11:44:48 EDT
Perhaps this is intended to be managed by some combination of:
    -maxidle <number>   (Maximum number of idle nodes to allow)
    -maxjobs <number>   (Maximum number of jobs ever submitted at once)

Will experiment...
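For reference, the same two per-DAG throttles can also be set in the Condor configuration rather than on each condor_submit_dag invocation. A hedged sketch (the values are arbitrary examples, and per the later comments these knobs are still enforced per dagman instance, not schedd-wide):

```
# condor_config.local -- per-dagman throttles, roughly equivalent to
# -maxjobs / -maxidle on condor_submit_dag (example values only)
DAGMAN_MAX_JOBS_SUBMITTED = 50
DAGMAN_MAX_JOBS_IDLE = 20
```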
Comment 2 Pete MacKinnon 2009-09-25 09:46:54 EDT
Examining the submit code, -maxidle/-maxjobs provide no relief, since they only count jobs per dagman instance. Have to look at the schedd to figure out whether the submit client can get the "overall" picture at submission time.
Comment 3 Pete MacKinnon 2009-09-30 11:05:53 EDT
New BZ created (526480); it will promote a workaround of capacity planning prior to large multi-DAG deployments: set MAX_JOBS_RUNNING appropriately high (estimate the number of concurrent DAGs times the number of nodes in the largest DAG).
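The capacity-planning estimate from the workaround can be worked through numerically. A sketch, using the 500 concurrent DAGs from this report and an assumed largest-DAG size of 2 nodes:

```shell
# Workaround estimate: MAX_JOBS_RUNNING >= concurrent DAGs x nodes in largest DAG.
CONCURRENT_DAGS=500
MAX_NODES_PER_DAG=2                     # assumption for illustration
ESTIMATE=$(( CONCURRENT_DAGS * MAX_NODES_PER_DAG ))

# Per the bug description the dagman jobs themselves also count against
# the limit, so adding one slot per DAG gives extra headroom:
SAFE=$(( ESTIMATE + CONCURRENT_DAGS ))
echo "MAX_JOBS_RUNNING >= $ESTIMATE (node jobs) or $SAFE (including dagmans)"
```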

Next level of analysis on this BZ will focus on why the schedd does not appear to do bookkeeping of ALL jobs (dagman + nodes). Will also try to solicit input/opinions from UW.
Comment 4 Pete MacKinnon 2010-04-12 17:09:18 EDT
Referenced upstream at
Ultimate resolution in that ticket.

Write a KB article that will be given to Lana for reference in the next User Guide.

Consult Mike Cressman or John Thomas for KB article tips.
Comment 6 Pete MacKinnon 2010-06-04 16:53:19 EDT
KB article submitted to SME jthomas for tech review.
