Bug 671546 - hfs code cleanup for usage
Summary: hfs code cleanup for usage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: condor
Version: 1.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: 2.0
Assignee: Erik Erlandson
QA Contact: Lubos Trilety
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-01-21 20:13 UTC by Jon Thomas
Modified: 2011-06-27 14:33 UTC
CC: 5 users

Fixed In Version: condor-7.5.6-0.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-06-27 14:33:23 UTC
Target Upstream Version:



Description Jon Thomas 2011-01-21 20:13:54 UTC
1) Remove groupUsed from negotiationTime and negotiateWithGroup

The variable groupUsed was a placeholder used to accumulate usage information for the group "none".

With "none" now a formal group, accountant.GetWeightedResourcesUsed() (etc.) is used to get usage information.

This is mostly a code cleanup, with a (small) performance benefit: a chunk of code can be removed from negotiateWithGroup, and any speedup will help users with negotiation rates set to 1 or some other small value.
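To make the cleanup concrete, here is a minimal toy sketch (hypothetical names, not the actual Condor code): once "none" is a formal group, its usage can simply be queried from the accountant instead of being threaded through the negotiation loop in a groupUsed accumulator.

```python
# Toy model of the accountant (hypothetical; not the real Condor API).
# With "none" as a formal group, its usage lives in the accountant
# like any other group's, so the separate groupUsed variable goes away.
class GroupAccountant:
    def __init__(self):
        self._usage = {}

    def add_usage(self, group, amount):
        # Usage is recorded per group as matches are made.
        self._usage[group] = self._usage.get(group, 0) + amount

    def get_weighted_resources_used(self, group):
        # One query replaces the hand-carried accumulator.
        return self._usage.get(group, 0)

acct = GroupAccountant()
acct.add_usage("none", 5)
assert acct.get_weighted_resources_used("none") == 5
```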

2) I see some inconsistencies in the use of

accountant.GetResourcesUsed
and
accountant.GetWeightedResourcesUsed

If weighted slots are turned off, these return the same value. If they are turned on, the values are quite different. Weighted slots are turned off with dynamic slots, but I think these should be made consistent.
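A toy model (hypothetical; not the actual Condor Accountant) of why the two calls agree only when weighted slots are off, i.e. when every slot weight is 1:

```python
# Sketch of the two accounting views. Method names mirror the Condor
# identifiers mentioned above but this is an illustrative stand-in.
class ToyAccountant:
    def __init__(self, slot_weights):
        self.slot_weights = list(slot_weights)

    def get_resources_used(self):
        # Unweighted: one unit per claimed slot.
        return len(self.slot_weights)

    def get_weighted_resources_used(self):
        # Weighted: sum of per-slot weights (e.g. SlotWeight = Cpus).
        return sum(self.slot_weights)

# Weighted slots off: every weight is 1, so the two calls agree.
plain = ToyAccountant([1, 1, 1])
assert plain.get_resources_used() == plain.get_weighted_resources_used() == 3

# Weighted slots on: the values are quite different.
weighted = ToyAccountant([4, 2, 1])
assert weighted.get_resources_used() == 3
assert weighted.get_weighted_resources_used() == 7
```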

Comment 1 Erik Erlandson 2011-01-25 16:57:51 UTC
(In reply to comment #0)

I noticed that groupUsed has been removed upstream, so I'm anticipating we get that cleanup when we migrate to 7.6 for grid 2.

I had used accountant.GetResourcesUsed() in the HFS-specific code path, since we are explicitly requiring non-weighted slots.  On the other hand, any future enhancements of HFS for supporting weighted slots would require moving back to GetWeightedResourcesUsed().  I think that the new iterative logic for adapting to rejections and overlapping-pool will also enable weighted slots to work pretty well with HFS -- experiment for 2.x inclusion?

Comment 2 Jon Thomas 2011-01-25 17:38:12 UTC
"I noticed that groupUsed has been removed upstream"

It could be the variable was used in older HFS code, then removed, and then I put it back in for tracking usage for "none". I added it in:

https://bugzilla.redhat.com/show_bug.cgi?id=629614

Upstream wouldn't have this yet.

I'm not sure there is anything that can be done to fix weighted slots (period) other than a rewrite that integrates "hfs" with negotiation. The problem is "ask for 1 slot and get 8". It's really not an HFS specific problem. It might work if we could convert a weighted slot to a partitionable slot on the fly, but the partitionable flag is at the startd.
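The "ask for 1 slot and get 8" problem can be sketched as follows (hypothetical function, assuming a static slot is claimed whole while a partitionable slot carves off only what was requested):

```python
# Toy illustration of why static weighted slots over-charge usage.
def charge_for_match(requested_cpus, slot_weight, partitionable):
    if partitionable:
        # A partitionable slot splits off just the requested amount.
        return requested_cpus
    # A static slot is all-or-nothing: the full weight is charged.
    return slot_weight

# A job needing 1 cpu on an 8-cpu static slot is charged for all 8.
assert charge_for_match(1, 8, partitionable=True) == 1
assert charge_for_match(1, 8, partitionable=False) == 8
```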

Comment 3 Martin Kudlej 2011-03-07 13:23:22 UTC
What are the steps to verify this issue, please?

Comment 4 Erik Erlandson 2011-03-07 15:23:55 UTC
(In reply to comment #3)
> What are the steps to verify this issue, please?

These are code cleanup changes intended to make no change to functionality.  In that respect, verification would mean making sure HFS works as it did before (excepting any new changes that do alter functionality).

Comment 5 Lubos Trilety 2011-05-20 06:50:30 UTC
Tested with:
condor-7.6.1-0.5

Tested on:
RHEL5 i386,x86_64  - passed
RHEL6 i386,x86_64  - passed

>>> VERIFIED

