Bug 585212
Summary: Recent updates to the collector caused a memory leak.

Product: Red Hat Enterprise MRG | Reporter: Timothy St. Clair <tstclair>
Component: condor | Assignee: Timothy St. Clair <tstclair>
Status: CLOSED ERRATA | QA Contact: Luigi Toscano <ltoscano>
Severity: high | Priority: high
Version: Development | CC: ltoscano, matt, tao
Target Milestone: 1.3 | Target Release: ---
Hardware: All | OS: Linux
Doc Type: Bug Fix
Last Closed: 2010-10-20 11:30:07 UTC
Description
Timothy St. Clair
2010-04-23 12:50:13 UTC
Classad deletion. Fixed in 7.4.3-0.11.

How to quickly reproduce:
- configure a cluster of at least two Condor instances (1 Central Manager, >=1 execute node)
- enable dynamic slots on both (one big slot for each machine)
- increase the number of "generated" slots with NUM_CPUS (at least 32)
- submit a huge number of simple jobs (for example, a job description file that queues 15000 instances of "uname -a", with one such file submitted every 30 minutes)
- watch the memory (RSS) used by the collector on the Central Manager

With a simple cluster of two machines, the RSS memory used by the collector from condor-7.4.3-0.10 increases quickly (within one or two hours), while it stays constant with condor-7.4.4-0.4 after one week of uninterrupted job processing. Verified on RHEL 5.5, i386/x86_64.
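The reproduction steps above can be sketched with concrete Condor files. This is a minimal sketch, not the exact configuration used by the reporter: it assumes a standard dynamic (partitionable) slot setup for the 7.4 series, with NUM_CPUS inflated to generate many slot ClassAds per machine.

```
# condor_config.local on each machine: one big partitionable slot,
# advertised as if the machine had 32 CPUs (values are illustrative)
NUM_CPUS = 32
SLOT_TYPE_1 = 100%
SLOT_TYPE_1_PARTITIONABLE = True
NUM_SLOTS_TYPE_1 = 1
```

A simple submit-description file that queues 15000 instances of "uname -a", resubmitted periodically to keep slot ClassAds churning in the collector:

```
# uname.sub -- submit with: condor_submit uname.sub
universe   = vanilla
executable = /bin/uname
arguments  = -a
output     = uname.$(Cluster).$(Process).out
error      = uname.$(Cluster).$(Process).err
log        = uname.log
queue 15000
```

The collector's RSS on the Central Manager can then be sampled over time, e.g. with `ps -o rss= -C condor_collector`, to observe the growth described above.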