Red Hat Bugzilla – Bug 493810
RFE: Slot/Job requirement diagnostics from Startd
Last modified: 2010-06-04 08:01:31 EDT
While testing https://bugzilla.redhat.com/show_bug.cgi?id=472607
I had added the following lines to condor_config.local:
SLOT_TYPE_1 = CPUS=100%,DISK=100%,SWAP=100%
SLOT_TYPE_1_PARTITIONABLE = TRUE
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
These lines enable dynamic provisioning.
Now I wanted to test a low-latency job, and although all jobs
submitted via condor_submit worked well, I could not find out
why StartLog said "Slot requirements not satisfied." for every
AMQP-submitted low-latency job. There was no helpful message
in the log even with
ALL_DEBUG = D_FULLDEBUG D_JOB D_MACHINE D_COMMAND
After removing the four lines mentioned above, everything started
to work seamlessly.
Even with those lines in place, however, I was able to make the
AMQP-submitted job run by adding RequestDisk and RequestMemory
ClassAd attributes to the job (as Matt hinted).
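For example, adding attributes of this shape to the job ad lets a
fetched job match a partitionable slot (the values here are
illustrative, not taken from the original report; RequestMemory is
in megabytes and RequestDisk in kilobytes, but check the manual for
your Condor version):
RequestMemory = 512
RequestDisk = 102400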
Versions affected (I am not sure all are used, but I am including
them for reference):
I would expect it either to log a meaningful message, or to run the
job without hassle (not requiring the Request* attributes). I think
the latter implies the former.
Issue at hand is the interaction of fetched jobs and partitionable slots. Slots that can be partitioned require Request* attributes on incoming jobs, which is fine. The problem is that the code path used to actually partition slots is not hit for fetched jobs. This means a slot cannot be partitioned by a fetched job.
This is probably best as an RFE for information as to why the job or slot are rejecting one another.
A workaround while debugging is to wrap the job's requirements or the slot's requirements in debug().
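As a sketch, the debug() wrapper mentioned above could be applied in
the startd configuration like this (debug() is the ClassAd tracing
function; confirm its availability in your Condor version):
START = debug( $(START) )
With this in place, evaluation of the START expression is traced in
the StartLog, which shows which clause of the requirements fails.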
Target set to 1.3
What would be useful?
Right now you can get a copy of the job and machine ads with D_JOB and D_MACHINE, and you'll get notification that their requirements were not met with either "Slot requirements not satisfied." or "Job requirements not satisfied."
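To get those ad dumps in the StartLog without enabling full
debugging on every daemon, a narrower configuration along these
lines should suffice (a sketch using the flags named above):
STARTD_DEBUG = D_JOB D_MACHINE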
Is there a use case for partitionable slots and low-latency jobs
at the same time? If not, I see no issue other than that I tested
in a badly configured environment.
Otherwise I would like to know why normal condor_submit-ted jobs
went fine while low-latency jobs failed with requirements not
satisfied. Depending on the answer, I may think about some useful
and easy improvement.
There are such use cases. The reason is that condor_submit fills in
default RequestMemory and RequestDisk values; condor_submit is a
thick client. In the low-latency case, the writer of the message
needs to replicate some of the knowledge embodied in condor_submit.
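As a sketch of what the message writer would have to replicate, the
defaults condor_submit inserts are roughly of this shape
(illustrative expressions, not the exact ones from the
condor_submit source):
RequestMemory = ceiling(ImageSize / 1024.0)
RequestDisk = DiskUsage
That is, memory in megabytes derived from the job's image size, and
disk from the job's estimated disk usage.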
A condor_submit -> AMQP path is desirable.
This may be resurrected later if somebody asks for similar
functionality, but it has been dead for a while and I have not
worked on testing Condor for months.
Closing as NOTABUG.