Bug 814639 - Condor sometimes fails to clean cgroup entries at job exit
Status: CLOSED WONTFIX
Product: Fedora
Classification: Fedora
Component: condor
Version: 16
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Assigned To: Matthew Farrellee
QA Contact: Fedora Extras Quality Assurance
Reported: 2012-04-20 06:15 EDT by Bert DeKnuydt
Modified: 2013-02-13 11:32 EST
CC: 4 users

Doc Type: Bug Fix
Last Closed: 2013-02-13 11:31:57 EST
Type: Bug
Attachments: None
Description Bert DeKnuydt 2012-04-20 06:15:07 EDT
Description of problem:

Condor does not always clean up the cgroup entries it creates
for jobs.

Version-Release number of selected component (if applicable):

condor-7.7.5-0.2.fc16.x86_64

How reproducible:

Always (given enough jobs)

Steps to Reproduce:
1. Run a condor compute node with cgroups enabled, e.g.
   have in the config:
   BASE_CGROUP = /condor
2. Have lots of jobs run on the machine
3. Have a look in e.g.
   /sys/fs/cgroup/cpu,cpuacct/condor
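
The check in step 3 can be scripted. A minimal sketch, assuming the cgroup v1 layout from this report (BASE_CGROUP = /condor under the cpu,cpuacct controller); the path may differ on other setups:

```shell
#!/bin/sh
# Sketch: count leftover job_* cgroup directories on a Condor compute node.
# The default root assumes the cgroup v1 layout from this report.

list_job_cgroups() {
    # print job_* directories directly under the given cgroup root
    root="${1:-/sys/fs/cgroup/cpu,cpuacct/condor}"
    find "$root" -mindepth 1 -maxdepth 1 -type d -name 'job_*' 2>/dev/null
}

# Compare this count against the number of busy slots shown by
# condor_status; a much larger count means stale entries are accumulating.
list_job_cgroups "$@" | wc -l
```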

Actual results:

I see a large number of directory entries named job_<number>_<number>;
I'd expect to see only those of jobs that are currently running.

E.g. on a compute node, up 27 days:

# condor_status rat

Name               OpSys      Arch   State     Activity LoadAv Mem   ActvtyTime

slot1@rat.esat.kul LINUX      X86_64 Unclaimed Idle     1.000  80680  0+00:08:07
slot1_4@rat.esat.k LINUX      X86_64 Claimed   Busy     0.990  8000  0+00:16:38
slot1_5@rat.esat.k LINUX      X86_64 Claimed   Busy     0.990  8000  0+00:16:17
[...]

So 2 jobs running.

But:
# ls -1d job* | wc -l
72

# ls -d job*
job_1002_0  job_2452_0  job_2457_0    job_25606_34  job_2661_0  job_2697_0 
[...many more...]

Expected results:

Only entries for currently running jobs.

Additional info:

*) It's not that no entries are cleaned: thousands of jobs have run
   on this machine.

*) I don't know if condor recycles job numbers, but if so, that will
   cause problems if these entries already exist.

*) For the jobs whose entries remain, I see in the corresponding
   StarterLog:
04/19/12 15:55:28 DaemonCore: command socket at <10.33.133.103:9676>
04/19/12 15:55:28 DaemonCore: private command socket at <10.33.133.103:9676>
04/19/12 15:55:28 Setting maximum accepts per cycle 8.
04/19/12 15:55:28 Communicating with shadow <10.33.133.88:9610?noUDP>
04/19/12 15:55:28 Submitting machine is "qayd.esat.kuleuven.be"
04/19/12 15:55:28 setting the orig job name in starter
04/19/12 15:55:28 setting the orig job iwd in starter
04/19/12 15:55:28 sysapi_disk_space_raw: Free disk space kbytes overflow, capping to INT_MAX
04/19/12 15:55:28 Done setting resource limits
04/19/12 15:55:28 Job 1002.0 set to execute immediately
04/19/12 15:55:28 Starting a VANILLA universe job with ID: 1002.0
04/19/12 15:55:28 IWD: /users/visics/bfernand/Documents/Projects/Domain Adaptation/Codes/mining_oxford/
04/19/12 15:55:29 Output file: /users/visics/bfernand/Documents/Projects/Jobs/out/oxford_ocl_data_query.out
04/19/12 15:55:29 Error file: /users/visics/bfernand/Documents/Projects/Jobs/err/oxford_ocl_data_query.err
04/19/12 15:55:29 Renice expr "19" evaluated to 19
04/19/12 15:55:29 Using wrapper /usr/libexec/condor/condor_job_wrapper to exec /software/matlab/2010b/bin/matlab -nodisplay -nodesktop -r oxford_mining_ocl_query
04/19/12 15:55:29 Running job as user root
04/19/12 15:55:29 Create_Process succeeded, pid=16469
04/19/12 17:46:46 Got SIGQUIT.  Performing fast shutdown.
04/19/12 17:46:46 ShutdownFast all jobs.
04/19/12 17:46:46 error writing to named pipe: watchdog pipe has closed
04/19/12 17:46:46 LocalClient: error sending message to server
04/19/12 17:46:46 ProcFamilyClient: failed to start connection with ProcD
04/19/12 17:46:46 kill_family: ProcD communication error
04/19/12 17:46:46 ERROR "ProcD has failed" at line 621 in file /builddir/build/BUILD/condor-7.7.5/src/condor_utils/proc_family_proxy.cpp
04/19/12 18:12:36 Setting maximum accepts per cycle 8.

  So as the job finishes, something rotten happens...
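
As a stopgap, stale entries like these can in principle be removed by hand, since a cgroup directory that contains no tasks can be removed with rmdir. A hypothetical cleanup sketch, not part of Condor, again assuming the cgroup v1 path from this report:

```shell
#!/bin/sh
# Hypothetical cleanup sketch (not part of Condor): remove job_* cgroup
# directories whose tasks file is empty, i.e. cgroups with no live processes.
# The default root assumes the cgroup v1 layout from this report.

cleanup_stale_cgroups() {
    root="${1:-/sys/fs/cgroup/cpu,cpuacct/condor}"
    for d in "$root"/job_*; do
        [ -d "$d" ] || continue
        # skip cgroups that still have processes (non-empty tasks file)
        if [ -s "$d/tasks" ]; then
            continue
        fi
        # on cgroupfs, rmdir succeeds only for a task-free cgroup,
        # so a racing live job is left alone
        rmdir "$d" 2>/dev/null && echo "removed $d"
    done
}
```

Running this from cron would keep the directory count bounded, but it only hides the leak this bug is about; the ProcD failure in the log above is the real problem.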
Comment 1 Fedora End Of Life 2013-01-16 10:10:43 EST
This message is a reminder that Fedora 16 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 16. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '16'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 16's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 16 is end of life. If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora, you are encouraged to click on 
"Clone This Bug" and open it against that version of Fedora.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 2 Fedora End Of Life 2013-02-13 11:32:00 EST
Fedora 16 changed to end-of-life (EOL) status on 2013-02-12. Fedora 16 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.
