Description of problem:

1) There is one type of job identification in the return object of SubmitJob():

  result = session.getObjects(_class="scheduler", _package="com.redhat.grid")[0].SubmitJob(ads)
  result.Id == hostname1#32   - SubmitHostname#ClusterId

2) There is another type of identification in the job list from GetJobSummaries():

  {u'Jobs': {u'1.0': {...}}}   - ClusterId.ProcId

3) The function GetJobAd() requires a job id in the form ClusterId.ProcId.

I think it would be good to unify job identification in the QMF API, and it should be GlobalJobId. As it stands, the identification from result.Id cannot be passed directly to GetJobAd().

Version-Release number of selected component (if applicable):
qmf-0.7.946106-12.el5
python-qmf-0.7.946106-8.el5
condor-qmf-7.4.4-0.9.el5
qmf-devel-0.7.946106-12.el5
condor-7.4.4-0.9.el5
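A minimal sketch of the mismatch, assuming a connected qmf.console.Session (session) and a valid job-ad map (ads) as above; exact method signatures may vary between the python-qmf builds listed:

  schedd = session.getObjects(_class="scheduler", _package="com.redhat.grid")[0]
  result = schedd.SubmitJob(ads)
  print result.Id             # e.g. "hostname1#32" (SubmitHostname#ClusterId)
  schedd.GetJobAd(result.Id)  # fails: GetJobAd() wants "ClusterId.ProcId", e.g. "32.0"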
FYI re (2): the fact that ClusterId.ProcId is present as a key should not be relied upon. It is an artifact of not being able to nest lists in maps (only maps can nest in maps).
(2) is no longer the case; as of 7.4.4-0.12, GetJobSummaries returns a list of maps.
(1) is actually SCHEDD_NAME#ClusterId of the created submission. This can be used to locate the job. A job created with SubmitJob will always have a ProcId of 0, so it would not be unreasonable to return SCHEDD_NAME#ClusterId.0, which is useful in a call to GetJobAd.
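For example (a sketch against the current format, continuing the names from the description):

  cluster = result.Id.split("#", 1)[1]      # "hostname1#32" -> "32"
  job_ad = schedd.GetJobAd(cluster + ".0")  # SubmitJob jobs always have ProcId 0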
Part of FH sha e7ea72c: we now return schedd_name#cluster.proc, which makes it easier to chain test scripts after the original submit (hold, release, GetJobAd, etc.).
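With that change, chaining is a simple split on the first '#' (a sketch assuming the post-e7ea72c format; names as in the description):

  result = schedd.SubmitJob(ads)
  job_id = result.Id.split("#", 1)[1]  # "schedd_name#32.0" -> "32.0"
  job_ad = schedd.GetJobAd(job_id)     # usable directly after submit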
Set to be built post 7.4.4-0.17
I've tested it on RHEL 4.9/5.6, i386/x86_64, with:

ruby-qmf-0.7.946106-27.el5
python-qmf-0.7.946106-14.el5
condor-wallaby-tools-3.9-2.el5
qmf-0.7.946106-27.el5
python-condorutils-1.4-6.el5
condor-7.4.5-0.7.el5
condor-wallaby-client-3.9-2.el5
condor-job-hooks-1.4-6.el5
condor-low-latency-1.1-2.el5
condor-wallaby-base-db-1.5-2.el5
condor-qmf-7.4.5-0.7.el5

1) and 2) are OK.

3) Is it correct that hold, release, GetJobAd, etc. still need ClusterId.ProcId instead of schedd_name#cluster.proc?
It is for now. I suppose we could make it more consistent. The thing is, by the time you are holding or releasing a job, you have already identified (and connected to) the scheduler agent for that job.
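For instance (a sketch; the HoldJob/ReleaseJob names and their Reason argument are assumptions about the condor-qmf schema, not confirmed above):

  job_id = result.Id.split("#", 1)[1]           # drop the schedd name for the local id
  schedd.HoldJob(job_id, "testing hold")        # same schedd agent that ran SubmitJob
  schedd.ReleaseJob(job_id, "testing release")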
Please verify without (3). (3) could be an RFE for further discussion, but does not have a good outlook.
3) I've created RFE bug 674290. 1) and 2) are OK, so --> VERIFIED