Red Hat Bugzilla – Bug 505923
dedicated scheduler may be inappropriately reusing claims
Last modified: 2010-10-14 12:11:35 EDT
Description of problem:
From condor-users -
My test pool: 1 dedicated schedd, 2 startds.
I set a concurrency limit in the negotiator config: "license1_LIMIT = 2".
Then I submitted 3 parallel jobs, each requesting 2 slots ("machine_count = 2"):
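A setup like the one described might look as follows (a sketch; the `concurrency_limits` line in the submit description is an assumption based on the report, since the original submit file is not shown):

```
# Negotiator config: allow at most 2 concurrent license1 tokens pool-wide
license1_LIMIT = 2
```

```
# Parallel-universe submit description (submitted 3 times)
universe           = parallel
executable         = /bin/sleep
arguments          = 60
machine_count      = 2
concurrency_limits = license1
queue
```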
The first job could not run because the concurrency limit was exceeded. I removed the first job from the schedd, and the second job started to run. However, after the second job completed, the third job started running!
When I set NEGOTIATOR_DEBUG to D_FULLDEBUG, I found something wrong in the logs: after the second job completed, the schedd did not communicate with the negotiator, so the jobs' concurrency limits could not be checked.
Version-Release number of selected component (if applicable):
Claims do not appear to be released after a parallel universe job finishes. After my parallel job completed, my slots remained in 'claimed' state. These claims blocked execution of non-parallel job, but the slots were reusable by another parallel job.
[eje@rorschach ~]$ condor_status
Name                             OpSys  Arch    State    Activity  LoadAv  Mem  ActvtyTime

firstname.lastname@example.org   LINUX  X86_64  Claimed  Idle      0.360   951  0+00:01:13
email@example.com                LINUX  X86_64  Claimed  Idle      0.000   951  0+00:01:14

                     Total Owner Claimed Unclaimed Matched Preempting Backfill

        X86_64/LINUX     2     0       2         0       0          0        0

               Total     2     0       2         0       0          0        0
(In reply to comment #1)
> Claims do not appear to be released after a parallel universe job finishes.
> After my parallel job completed, my slots remained in 'claimed' state.
This behavior is intended, and governed by config parameter UNUSED_CLAIM_TIMEOUT.
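For example (a sketch; the value 600 is illustrative, and the default may differ across versions):

```
# condor_config: release an unused claim after 10 minutes
# instead of holding it indefinitely for re-use
UNUSED_CLAIM_TIMEOUT = 600
```

Lowering this value makes the dedicated scheduler give claims back to the pool sooner, at the cost of re-negotiating for subsequent parallel jobs.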
The problem seems to be that claim re-use does not properly handle concurrency limits. In the repro example, the third job should not be eligible, since running it exceeds the concurrency limit.
pushed a fix to branch: V7_4-BZ505923-Ded-Schedd-Concurrency-Limits-branch
Tested with (version):
Test pool: 1 dedicated schedd, 1 startd.
Set a concurrency limit in the negotiator config: "license1_LIMIT = 2".
1. Submit 3 parallel jobs, each requesting one slot ("machine_count = 1").
2. The first job could not run because the concurrency limit was exceeded.
3. Remove the first job; the second job started to run.
4. After the second job completed, check that the third job could not run because the concurrency limit was exceeded (see logs).
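The jobs used in the verification steps above might look like this (a sketch; the executable, arguments, and the `concurrency_limits` line are assumptions, since the actual submit file is not included in the report):

```
# Parallel-universe submit description (submitted 3 times)
universe           = parallel
executable         = /bin/sleep
arguments          = 120
machine_count      = 1
concurrency_limits = license1
queue
```

With the fix, concurrency limits are re-checked before a claim is reused, so the third job should remain idle rather than start on the second job's claim.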
RHEL4 x86_64 - passed
RHEL4 i386 - passed
RHEL5 x86_64 - passed
RHEL5 i386 - passed
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
Previously, the dedicated scheduler did not take concurrency limits into account when re-using claims after a parallel universe job finished, so a subsequent parallel job could start even though its concurrency limits were exceeded. With this update, concurrency limits are checked before claims are re-used.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.