| Summary: | Failed to scale dc after scaling rc | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Wang Haoran <haowang> |
| Component: | openshift-controller-manager | Assignee: | Michal Fojtik <mfojtik> |
| Status: | CLOSED EOL | QA Contact: | zhou ying <yinzhou> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | ||
| Version: | 3.2.0 | CC: | aos-bugs, wmeng, xiuwang |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-23 12:48:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Wang Haoran
2016-04-01 10:52:00 UTC
This appears to be a timing issue: when the RC's replica count is updated, the replica count in the DC isn't synced for a while. When the DC's replica count is then updated, the DC controller sees the differing counts, assumes that the RC is being updated independently, and syncs the RC's count (2) back to the DC, treating the RC's count as canonical. If scaling the RC poked the DC's sync loop, this would probably be less of a problem, since the RC->DC sync would occur "immediately" after the RC scale, and the DC scale would then run with the proper replica counts in place.

Scaling a ReplicationController that is managed by a DeploymentConfig can lead to unpredictable behavior like this. The correct action is to scale the DeploymentConfig. Lowering severity.

One possible solution would be to print a warning when a user tries to scale the RC instead of the DC and, rather than proceeding to scale the RC, go ahead and scale the DC.
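The race described above can be illustrated with a minimal, self-contained sketch. This is plain Python with hypothetical stand-in objects, not the actual controller code (which lives in the openshift-controller-manager), but it models the same back-sync behavior:

```python
from dataclasses import dataclass


@dataclass
class Scalable:
    """Hypothetical stand-in for an object with a replica count (DC or RC)."""
    replicas: int


def dc_sync(dc: Scalable, rc: Scalable) -> None:
    """Model of the DC controller's sync loop: when the counts differ,
    it assumes the RC holds the canonical count and copies it back
    into the DC."""
    if rc.replicas != dc.replicas:
        dc.replicas = rc.replicas


dc = Scalable(replicas=1)
rc = Scalable(replicas=1)

rc.replicas = 2     # user scales the RC directly (the discouraged path)
dc.replicas = 3     # before the RC->DC sync runs, user scales the DC
dc_sync(dc, rc)     # sync sees a mismatch and clobbers the DC scale

print(dc.replicas)  # 2 -- the scale to 3 was silently lost
```

In practice, the way to avoid the race is to scale the DeploymentConfig itself (e.g. `oc scale dc/<name> --replicas=N`) so the DC remains the single source of truth for the replica count.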