| Summary: | router startup sometimes ignores scale |
|---|---|
| Product: | OKD |
| Component: | Deployments |
| Version: | 3.x |
| Status: | CLOSED CURRENTRELEASE |
| Severity: | medium |
| Priority: | medium |
| Reporter: | Aleksandar Kostadinov <akostadi> |
| Assignee: | Michal Fojtik <mfojtik> |
| QA Contact: | zhou ying <yinzhou> |
| CC: | akostadi, aos-bugs, mkargaki, mmccomas, pweil, sross |
| Target Milestone: | --- |
| Target Release: | --- |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Type: | Bug |
| Regression: | --- |
| Last Closed: | 2017-05-30 12:50:09 UTC |
**Description** (Aleksandar Kostadinov, 2016-10-05 21:09:40 UTC)
Can you reproduce with `--loglevel=10`, just so we can double-check the requests that `oadm router` is sending?

---

Created attachment 1209705 [details]: loglevel 10 reproducer on origin

It was not as consistent on origin, but I could reproduce (see attached) with:

> oc v1.4.0-alpha.0+8f6030a
> kubernetes v1.4.0+776c994
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://172.18.14.117:8443
> openshift v1.4.0-alpha.0+8f6030a
> kubernetes v1.4.0+776c994

---

The actual requests and responses being sent to the API server look fine. Is there any way we could get the controller manager logs (at log level 4 or higher) from when this was running, to see if anything looks off in the DC controller? Also, can we get dumps of the DC and any deployments as YAML or JSON?

---

If you tell me the steps to obtain that log, I'll give it a go. But it may be more time-efficient to try reproducing in an environment you have access to.

---

I can no longer reproduce on 3.4.

---

I agree we are not going to backport this to 3.3. QA: can you verify this on 3.4?

---

Can't reproduce this issue with the latest OCP 3.4:

> # openshift version
> openshift v3.4.0.37+3b76456-1
> kubernetes v1.4.0+776c994
> etcd 3.1.0-rc.0
>
> [root@ip-172-18-11-194 ~]# oc get dc
> NAME               REVISION   DESIRED   CURRENT   TRIGGERED BY
> docker-registry    2          3         3         config
> registry-console   1          1         1         config
> tester             1          2         2         config
>
> [root@ip-172-18-11-194 ~]# oc get po
> NAME                       READY   STATUS    RESTARTS   AGE
> docker-registry-2-axuso    1/1     Running   0          45m
> docker-registry-2-sy6yu    1/1     Running   0          45m
> docker-registry-2-v0d2q    1/1     Running   0          45m
> registry-console-1-cp1i6   1/1     Running   0          44m
> tester-1-hni4v             1/1     Running   0          56s
> tester-1-zfpzh             1/1     Running   0          56s
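For reference, the diagnostics requested in this thread could be gathered with commands along these lines. This is a sketch against a live OpenShift 3.x cluster; the resource name `router` is the default created by `oadm router`, and the output filenames are arbitrary:

```shell
# Dump the router DeploymentConfig as YAML for inspection.
oc get dc router -o yaml > dc-router.yaml

# Dump the deployments themselves; in OpenShift 3.x these are
# ReplicationControllers labeled with the owning DC's name.
oc get rc --selector=openshift.io/deployment-config.name=router -o yaml > deployments-router.yaml

# Re-run router creation at high verbosity to capture the raw API
# requests and responses (they are written to stderr).
oadm router --replicas=2 --loglevel=10 2> oadm-router-loglevel10.log
```

The controller manager logs at log level 4 would come from the master process itself (e.g. a master started with `--loglevel=4`), not from the client side.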