Bug 1571111
| Summary: | The desiredNumberScheduled of DS is incorrect | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | DeShuai Ma <dma> |
| Component: | Master | Assignee: | Tomáš Nožička <tnozicka> |
| Status: | CLOSED NOTABUG | QA Contact: | Wang Haoran <haowang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.10.0 | CC: | aos-bugs, deads, jliggitt, jokerman, mmccomas, wmeng |
| Target Milestone: | --- | | |
| Target Release: | 3.10.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-05-03 20:36:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

DeShuai Ma 2018-04-24 06:34:49 UTC

upstream tracked issue: https://github.com/kubernetes/kubernetes/issues/53023

Not sure that's it. I think in this case this is caused by the carry patch we have to target the right nodes when a project default node selector is present, and we didn't patch the part that counts status, but I'd have to check.

> I think in this case this is caused by the carry patch we have to target the right nodes when a project default node selector is present, and we didn't patch the part that counts status, but I'd have to check

Adding David.
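For reference, the project default node selector in question is the per-namespace selector that OpenShift stores in the openshift.io/node-selector annotation. Below is a minimal, standalone sketch of the kind of check a helper like namespaceNodeSelectorMatches (seen in the diff further down) would perform; the function name, the lister-free namespace handling, and the example values are illustrative assumptions, not the carry patch's actual code.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// projectSelectorMatches reports whether a node satisfies the project default
// node selector stored on the namespace. "openshift.io/node-selector" is the
// annotation OpenShift uses for project node selectors; how the carry patch
// actually resolves the namespace (listers, caching) is omitted here.
func projectSelectorMatches(ns *v1.Namespace, node *v1.Node) (bool, error) {
	selectorStr, ok := ns.Annotations["openshift.io/node-selector"]
	if !ok || selectorStr == "" {
		return true, nil // no project selector: every node is eligible
	}
	selector, err := labels.Parse(selectorStr)
	if err != nil {
		return false, err
	}
	return selector.Matches(labels.Set(node.Labels)), nil
}

func main() {
	// Hypothetical project and node, for illustration only.
	ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{
		Annotations: map[string]string{"openshift.io/node-selector": "region=infra"},
	}}
	node := &v1.Node{ObjectMeta: metav1.ObjectMeta{
		Labels: map[string]string{"region": "primary"},
	}}
	fmt.Println(projectSelectorMatches(ns, node)) // false <nil>: node excluded by project selector
}
```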
There was no node selector on the DS, indicating it wanted to run on all nodes, which is not allowed by the project's node selector.
I think the current behavior is accurate.
If we wanted to act as though the DS limited itself to the nodes allowed by the project selector, we could do this:
```diff
 if matches, matchErr := dsc.namespaceNodeSelectorMatches(node, ds); matchErr != nil {
 	return false, false, false, matchErr
 } else if !matches {
-	shouldSchedule = false
-	shouldContinueRunning = false
+	// This matches the behavior in the ErrNodeSelectorNotMatch case above
+	return false, false, false, nil
 }
```
But that would make the status no longer accurately reflect the intent expressed in the DS spec.
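To make that concrete: in the upstream controller, desiredNumberScheduled is derived in the status-update loop from the first (wantToRun) result of nodeShouldRunDaemonPod, so the change above would shrink the count to just the selector-matching nodes, while the current carry patch leaves wantToRun true and keeps the count at the full node total. A minimal standalone sketch, modeled on updateDaemonSetStatus in the Kubernetes 1.10 controller, with the helper and node values as illustrative stand-ins for the real controller plumbing:

```go
package main

import "fmt"

// countDesired mirrors how the status loop computes desiredNumberScheduled:
// only the first (wantToRun) result of the per-node check matters for the
// count. shouldRun stands in for dsc.nodeShouldRunDaemonPod(node, ds) and
// node names stand in for *v1.Node objects; both are simplifications.
func countDesired(nodes []string, shouldRun func(node string) (wantToRun, shouldSchedule, shouldContinueRunning bool)) int {
	desired := 0
	for _, node := range nodes {
		wantToRun, _, _ := shouldRun(node)
		if wantToRun {
			desired++
		}
	}
	return desired
}

func main() {
	// Three nodes, of which only node-a matches the project selector.
	nodes := []string{"node-a", "node-b", "node-c"}
	matches := func(node string) bool { return node == "node-a" }

	// Current carry patch: wantToRun stays true on every node, so the DS
	// reports desiredNumberScheduled == 3 even though pods land on one node.
	current := countDesired(nodes, func(node string) (bool, bool, bool) {
		return true, matches(node), matches(node)
	})

	// Proposed change: non-matching nodes return false across the board, so
	// desiredNumberScheduled drops to 1 and no longer reflects the DS spec's
	// "run everywhere" intent.
	proposed := countDesired(nodes, func(node string) (bool, bool, bool) {
		m := matches(node)
		return m, m, m
	})

	fmt.Println(current, proposed) // 3 1
}
```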
The current behavior seems reasonable to me. You tried to place yourself on every node and only got two. I don't think I'm concerned enough about revealing the number of nodes in the cluster to adjust it.