Bug 799838 - Jobs in IDLE or RUNNING state aren't visible via aviary API after HISTORY_INTERVAL period.
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: condor-aviary
Version: 2.1
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: 2.2
Assignee: Pete MacKinnon
QA Contact: Daniel Horák
Blocks: 828434 832968
 
Reported: 2012-03-05 08:16 UTC by Daniel Horák
Modified: 2012-10-30 13:25 UTC
CC List: 5 users

Fixed In Version: condor-7.6.5-0.16
Doc Type: Bug Fix
Doc Text:
Cause: Jobs are submitted through the Aviary or QMF interface.
Consequence: Active jobs and their submissions are not reported by the Aviary or QMF query interface; historic (i.e., completed) jobs are, however.
Fix: Earlier code changes that ensured history index files were correctly cleaned up also caused the internal job and submission collections to be aggressively purged of active jobs. This code was corrected.
Result: Jobs and submissions for active jobs appear as expected even while history indices are culled.
Last Closed: 2012-09-19 17:42:55 UTC


Attachments
Helpful script for testing this bz. (1.56 KB, text/x-python)
2012-06-29 12:18 UTC, Daniel Horák


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 832968 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Bugzilla 842665 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Product Errata RHSA-2012:1278 0 normal SHIPPED_LIVE Moderate: Red Hat Enterprise MRG Grid 2.2 security update 2012-09-19 21:40:26 UTC

Internal Links: 832968 842665

Description Daniel Horák 2012-03-05 08:16:54 UTC
Description of problem:
  Jobs are visible via the Aviary API only for a moment after submission, then disappear until each of them completes.
  (The "moment" is probably defined by HISTORY_INTERVAL.)

Version-Release number of selected component (if applicable):
  On both platforms, i386 and x86_64, on RHEL 5.8 and RHEL 6.2:
  # rpm -qa | grep condor
    condor-classads-7.6.5-0.12.el5.i386
    condor-aviary-7.6.5-0.12.el5.i386
    condor-7.6.5-0.12.el5.i386


How reproducible:
  100%


Steps to Reproduce:
1. Install "MRG Grid" and condor-aviary
  # yum install @mrg-grid condor-aviary

2. Configure HISTORY_INTERVAL in Condor to a short interval.
  # cat /etc/condor/config.d/99_test.config 
    HISTORY_INTERVAL = 30
  # service condor restart

3. Submit a few simple jobs.
  # cat /tmp/simple.job 
    universe = vanilla
    cmd = /bin/sleep
    args = 300
    output = /tmp/simple_job.$(cluster).$(process).out
    error = /tmp/simple_job.$(cluster).$(process).err
    Log = /tmp/simple_job.$(cluster).$(process).log
    requirements = (FileSystemDomain =!= UNDEFINED && Arch =!= UNDEFINED)
    queue 1
  
  # runuser -s /bin/bash -l condor -c "condor_submit /tmp/simple.job"
  # runuser -s /bin/bash -l condor -c "condor_submit /tmp/simple.job"
  # runuser -s /bin/bash -l condor -c "condor_submit /tmp/simple.job"

4. Periodically check the output from Aviary and compare it with the output of condor_q (e.g., with the commands below; a sketch of the underlying SOAP call follows these steps):
  # date "+[%H:%M:%S]"
  # PYTHONPATH=":/usr/share/condor/aviary/module" python /usr/share/condor/aviary/jobquery.py --cmd=getJobStatus 
  # condor_q

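For reference, the jobquery.py invocation above amounts to a single SOAP call against the Aviary query service. Below is a minimal sketch of an equivalent direct call, assuming the suds SOAP client and the WSDL path shown (both the library choice and the path are assumptions; adjust to your installation):

  # cat /tmp/aviary_query_sketch.py
    from suds.client import Client  # assumption: suds is installed

    # Assumed WSDL location for the Aviary query service.
    wsdl = 'file:///var/lib/condor/aviary/services/query/aviary-query.wsdl'

    client = Client(wsdl)
    # Target the endpoint seen in the output below.
    client.set_options(location='http://localhost:9091/services/query/')

    # No JobID constraint: fetch the status of all known jobs, mirroring
    # "jobquery.py --cmd=getJobStatus" with no job argument (assumed signature).
    for job_status in client.service.getJobStatus(None):
        print('%s %s' % (job_status.id.job, job_status.job_status))
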

Actual results:
  # date "+[%H:%M:%S]"
    [08:35:34]
  # PYTHONPATH=":/usr/share/condor/aviary/module" python /usr/share/condor/aviary/jobquery.py --cmd=getJobStatus 
    invoking http://localhost:9091/services/query/getJobStatus for job None
    [(JobStatus){
       id = 
          (JobID){
             job = "1.0"
             pool = "dhcp-lab-182.englab.brq.redhat.com"
             scheduler = "dhcp-lab-182.englab.brq.redhat.com"
          }
       status = 
          (Status){
             code = "OK"
          }
       job_status = "RUNNING"
     }, (JobStatus){
       id = 
          (JobID){
             job = "2.0"
             pool = "dhcp-lab-182.englab.brq.redhat.com"
             scheduler = "dhcp-lab-182.englab.brq.redhat.com"
          }
       status = 
          (Status){
             code = "OK"
          }
       job_status = "IDLE"
     }, (JobStatus){
       id = 
          (JobID){
             job = "3.0"
             pool = "dhcp-lab-182.englab.brq.redhat.com"
             scheduler = "dhcp-lab-182.englab.brq.redhat.com"
          }
       status = 
          (Status){
             code = "OK"
          }
       job_status = "IDLE"
     }]
  
  # condor_q
    -- Submitter: dhcp-lab-182.englab.brq.redhat.com : <10.34.33.182:51756> : dhcp-lab-182.englab.brq.redhat.com
     ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
       1.0   condor          3/5  08:35   0+00:00:16 R  0   0.0  sleep 300         
       2.0   condor          3/5  08:35   0+00:00:00 I  0   0.0  sleep 300         
       3.0   condor          3/5  08:35   0+00:00:00 I  0   0.0  sleep 300         
  
    3 jobs; 0 completed, 0 removed, 2 idle, 1 running, 0 held, 0 suspended
  
  # date "+[%H:%M:%S]"
    [08:36:13]
  
  # PYTHONPATH=":/usr/share/condor/aviary/module" python /usr/share/condor/aviary/jobquery.py --cmd=getJobStatus 
    invoking http://localhost:9091/services/query/getJobStatus for job None
    []
  
  # condor_q
    -- Submitter: dhcp-lab-182.englab.brq.redhat.com : <10.34.33.182:51756> : dhcp-lab-182.englab.brq.redhat.com
     ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
       1.0   condor          3/5  08:35   0+00:00:53 R  0   0.0  sleep 300         
       2.0   condor          3/5  08:35   0+00:00:00 I  0   0.0  sleep 300         
       3.0   condor          3/5  08:35   0+00:00:00 I  0   0.0  sleep 300         
  
    3 jobs; 0 completed, 0 removed, 2 idle, 1 running, 0 held, 0 suspended
  
  # date "+[%H:%M:%S]"
    [08:36:24]
  
Finished jobs are again accessible via Aviary:
  # date "+[%H:%M:%S]"
    [08:41:39]
  
  # PYTHONPATH=":/usr/share/condor/aviary/module" python /usr/share/condor/aviary/jobquery.py --cmd=getJobStatus 
    invoking http://localhost:9091/services/query/getJobStatus for job None
    [(JobStatus){
       id = 
          (JobID){
             job = "1.0"
             pool = "dhcp-lab-182.englab.brq.redhat.com"
             scheduler = "dhcp-lab-182.englab.brq.redhat.com"
          }
       status = 
          (Status){
             code = "OK"
          }
       job_status = "COMPLETED"
     }]

  # condor_q
    -- Submitter: dhcp-lab-182.englab.brq.redhat.com : <10.34.33.182:51756> : dhcp-lab-182.englab.brq.redhat.com
     ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
       2.0   condor          3/5  08:35   0+00:01:32 R  0   0.0  sleep 300         
       3.0   condor          3/5  08:35   0+00:00:00 I  0   0.0  sleep 300         
    
    2 jobs; 0 completed, 0 removed, 1 idle, 1 running, 0 held, 0 suspended
  
  # condor_history 
     ID      OWNER            SUBMITTED     RUN_TIME ST   COMPLETED CMD            
       1.0   condor          3/5  08:35   0+00:05:00 C   3/5  08:40 /bin/sleep 300 
  

Expected results:
  Jobs are visible via the Aviary API throughout the whole job life cycle.


Additional info:

Comment 1 Pete MacKinnon 2012-04-30 19:37:05 UTC
Modifications for bug 699737, which cleaned up dangling history index files, left the internal reset flags "stuck" on. As a result, the internal job collections were continually torn down and rebuilt, essentially throwing away the live jobs. History jobs were persisted and therefore stable.
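As a hypothetical illustration of that pattern (all names below are invented and the sketch is Python; the actual code lives in Condor's C++ job server):

  class JobServerState(object):
      def __init__(self):
          self.jobs = {}            # live + historic jobs, keyed by "cluster.proc"
          self.needs_reset = False  # set when history index files are culled

      def cull_history_indices(self):
          # The bug: this path set the flag, but nothing ever cleared it,
          # so it stayed "stuck" on after the first cull.
          self.needs_reset = True

      def refresh(self, history_jobs):
          if self.needs_reset:
              self.jobs.clear()               # tears down live jobs too
              self.jobs.update(history_jobs)  # only history jobs survive
              # The fix: clear the flag so later refreshes keep live jobs.
              self.needs_reset = False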

Comment 2 Pete MacKinnon 2012-05-04 20:23:32 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause: Jobs are submitted through the Aviary or QMF interface.
Consequence: Active jobs and their submissions are not reported by the Aviary or QMF query interface; historic (i.e., completed) jobs are, however.
Fix: Earlier code changes that ensured history index files were correctly cleaned up also caused the internal job and submission collections to be aggressively purged of active jobs. This code was corrected.
Result: Jobs and submissions for active jobs appear as expected even while history indices are culled.

Comment 5 Daniel Horák 2012-06-29 12:18:40 UTC
Created attachment 595289 [details]
Helpful script for testing this bz.

This script prints, every 10 seconds, the list of jobs from Aviary (with status)
and the output of the condor_q and condor_history commands.
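
The attachment itself is not reproduced in this report; a minimal sketch of a polling loop in the same spirit, using only commands already shown above, might look like:

  import os
  import subprocess
  import time

  # Environment matching the jobquery.py invocation used earlier.
  env = dict(os.environ, PYTHONPATH=':/usr/share/condor/aviary/module')

  while True:
      print(time.strftime('[%H:%M:%S]'))
      subprocess.call(['python', '/usr/share/condor/aviary/jobquery.py',
                       '--cmd=getJobStatus'], env=env)
      subprocess.call(['condor_q'])
      subprocess.call(['condor_history'])
      time.sleep(10)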

Comment 6 Daniel Horák 2012-06-29 13:36:12 UTC
Tested on RHEL 5.8 and 6.3 - i386, x86_64 with:
# rpm -qa | grep condor
  condor-classads-7.6.5-0.16.el6.i686
  condor-aviary-7.6.5-0.16.el6.i686
  condor-7.6.5-0.16.el6.i686

All jobs are correctly visible via Aviary with the proper state throughout the whole job lifetime.

>>> VERIFIED

Comment 8 errata-xmlrpc 2012-09-19 17:42:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1278.html

