Bug 493810
| Summary: | RFE: Slot/Job requirement diagnostics from Startd | | |
|---|---|---|---|
| Product: | Red Hat Enterprise MRG | Reporter: | Jan Sarenik <jsarenik> |
| Component: | condor | Assignee: | Matthew Farrellee <matt> |
| Status: | CLOSED NOTABUG | QA Contact: | Jan Sarenik <jsarenik> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 1.1 | CC: | freznice, iboverma, matt, rrati |
| Target Milestone: | 1.3 | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2010-06-04 12:01:13 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Jan Sarenik
2009-04-03 08:17:59 UTC
The issue at hand is the interaction of fetched jobs and partitionable slots. Slots that can be partitioned require Request* attributes on incoming jobs, which is fine. The problem is that the code path used to actually partition slots is not hit for fetched jobs, which means a slot cannot be partitioned by a fetched job. This is probably best treated as an RFE for diagnostics explaining why the job or slot are rejecting one another. A workaround while debugging is to wrap the job's requirements or the slot's requirements in debug(). Target set to 1.3.

What would be useful? Right now you can get a copy of the job and machine ads with D_JOB and D_MACHINE, and you will be notified that their requirements were not met with either "Slot requirements not satisfied." or "Job requirements not satisfied."

Is there a use case for partitionable slots and low-latency enabled at the same time? If not, I see no issues other than that I tested in a badly configured environment. Otherwise I would like to know why jobs submitted normally with condor_submit went fine, while low-latency jobs ended with requirements not satisfied. Based on that, I may be able to propose a useful and easy enhancement.

There are such use cases. The reason is that condor_submit fills in default RequestMemory and RequestDisk attributes. The condor_submit tool is thick; in the low-latency case the writer of the message needs to replicate some of the knowledge embodied in condor_submit. A condor_submit -> AMQP path is desirable.

This may be resurrected later if somebody asks for similar functionality, but it has gone stale and I have not worked on testing Condor for months. Closing as NOTABUG.
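The debugging workaround mentioned above (wrapping requirements in debug() and raising startd logging with D_JOB/D_MACHINE) could look roughly like the following sketch; the particular expressions shown are illustrative assumptions, not taken from the ticket's environment:

```
## condor_config sketch (startd side)
# Log copies of the job and machine ads during matchmaking.
STARTD_DEBUG = D_JOB D_MACHINE
# Wrap the slot's START expression in debug() so its evaluation
# is traced in the StartLog.
START = debug( $(START) )

## submit-file sketch (job side)
# Wrap the job's requirements in debug() to trace their evaluation.
requirements = debug( OpSys == "LINUX" && Memory >= 512 )
```

These fragments only increase logging verbosity around requirement evaluation; they do not change which jobs actually match.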
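The point about condor_submit filling in default Request* attributes can be sketched in Python: a low-latency message producer would have to replicate this defaulting itself before publishing a job ad. The function name and the default values below are illustrative assumptions, not condor_submit's actual logic.

```python
# Sketch of the defaulting a low-latency job-ad producer must replicate:
# condor_submit fills in RequestMemory and RequestDisk when the user
# omits them. Without these attributes a partitionable slot cannot be
# carved up for the job, and matching fails with
# "requirements not satisfied". Default values here are assumptions.
def fill_request_defaults(job_ad):
    """Return a copy of the job ad with Request* attributes defaulted."""
    ad = dict(job_ad)
    ad.setdefault("RequestMemory", 128)   # MiB, illustrative default
    ad.setdefault("RequestDisk", 1024)    # KiB, illustrative default
    return ad

ad = fill_request_defaults({"Cmd": "/bin/sleep", "Args": "60"})
print(ad["RequestMemory"], ad["RequestDisk"])  # → 128 1024
```

Explicitly supplied values are preserved; only missing attributes are defaulted, mirroring how a submit file's own request_memory/request_disk lines would take precedence.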