Bug 1472148
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Got "500 Internal Server Error" when watching bindings and instances of apigroup servicecatalog.k8s.io | | |
| Product: | OpenShift Container Platform | Reporter: | weiwei jiang <wjiang> |
| Component: | Master | Assignee: | Jordan Liggitt <jliggitt> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | weiwei jiang <wjiang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | CC: | aos-bugs, deads, dma, eparis, ewolinet, jforrest, jliggitt, jmatthew, jokerman, mmccomas, wjiang |
| Version: | 3.6.0 | | |
| Target Milestone: | --- | Target Release: | --- |
| Hardware: | Unspecified | OS: | Unspecified |
| Whiteboard: | | Fixed In Version: | |
| Doc Type: | If docs needed, set a value | Doc Text: | |
| Story Points: | --- | Clone Of: | |
| Clones: | 1473523 (view as bug list) | Environment: | |
| Last Closed: | 2017-08-16 19:38:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | Bug Blocks: | 1470622, 1473523, 1474520 |
| Attachments: | watch operation got 500 for bindings and instances (attachment 1300304) | | |
Description
weiwei jiang
2017-07-18 07:33:28 UTC
Created attachment 1300304
watch operation got 500 for bindings and instances
Reassigning, since all the other websockets are working in the console and this is specific to the service catalog websocket connections. @weiwei, can you confirm this server was installed using the ansible installer? Are those websocket connections going to the same hostname as the working websockets? We will need master logs from this time period, and possibly also logs from the service catalog containers, to figure out whether this is an aggregator problem or a service catalog problem.

Was this cluster created using `oc cluster up` or the installer?

(In reply to Paul Morie from comment #3)
> Was this cluster created using `oc cluster up` or the installer?

The cluster was created by the installer. I also found some useful log output in the service-catalog apiserver pod after the page returned the 500 error:

    # oc logs -f apiserver-gtn7l -n kube-service-catalog | grep E0719
    E0719 02:14:54.219751 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:14:57.241773 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:22:58.341435 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:22:59.346418 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:31:19.450048 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:32:15.484653 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:44:33.575782 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:47:38.592010 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:51:01.724747 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 02:55:26.687264 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
    E0719 03:07:50.860714 1 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted

After deprovisioning, the "Provisioned Services" entry remains on the Overview page until the page is manually refreshed, even though the instance has already been removed.

If the websocket watches are failing, then the issue in comment 5 is expected.

I'm able to recreate this locally in the console. However, if I ssh into the node and run:

    $ oc policy can-i watch bindings --as=eric -n testproject
    yes
    $ oc policy can-i watch instances --as=eric -n testproject
    yes

(where testproject is a newly created project and eric is a user who is an admin for testproject), both checks pass. When I edited a failed 500 request in devtools to add an "Authorization: Bearer" header with a valid token, the request came back as a 200. Is this something the installer can configure to happen within the console? If so, what needs to be added where?

Websockets from the browser cannot send an Authorization bearer header, and that should not be needed: the token is being passed via the Sec-WebSocket-Protocol header, and that works fine against all other endpoints (see the sketch below). If this is now failing against aggregated APIs then we have a problem, but we do not see this issue in the `oc cluster up` environment.

What needs to be changed in the installer? Tell Scott EXACTLY what flag is set differently, what file needs to contain what, etc. I'm not seeing the root cause here.
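For context on the token-over-subprotocol mechanism mentioned above: browsers cannot attach custom headers such as `Authorization` to a WebSocket handshake, so the Kubernetes apiserver instead accepts a bearer token encoded into the `Sec-WebSocket-Protocol` header as a subprotocol of the form `base64url.bearer.authorization.k8s.io.<base64url-encoded token>`. The sketch below shows how a console-style client might open a watch this way; the master URL, namespace, and helper function are illustrative assumptions, not the console's actual code, and the `base64.binary.k8s.io` framing subprotocol is what the apiserver's websocket watch support is expected to negotiate.

```js
// Minimal sketch: authenticating a browser WebSocket watch against the
// API server. Host, path, and function names are placeholder assumptions.

// Base64url-encode the token (URL-safe alphabet, no padding), the form
// the apiserver's websocket bearer-token authenticator expects.
function base64UrlEncode(s) {
  return btoa(s).replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

function watchInstances(token) {
  const url =
    'wss://master.example.com:8443' +
    '/apis/servicecatalog.k8s.io/v1alpha1' +      // 3.6-era group/version
    '/namespaces/testproject/instances?watch=true';

  const ws = new WebSocket(url, [
    // Frames arrive base64-encoded under this subprotocol.
    'base64.binary.k8s.io',
    // The bearer token rides along as a second subprotocol entry; the
    // apiserver strips it during authentication.
    'base64url.bearer.authorization.k8s.io.' + base64UrlEncode(token),
  ]);

  ws.onmessage = (event) => {
    // Each frame should carry one JSON watch event:
    // {type: "ADDED"|"MODIFIED"|"DELETED"|"ERROR", object: {...}}
    const watchEvent = JSON.parse(atob(event.data));
    console.log(watchEvent.type, watchEvent.object.metadata.name);
  };

  return ws;
}
```

Because this same mechanism works against every core endpoint, a 500 that appears only for `servicecatalog.k8s.io` resources points at the aggregation layer rather than at the token path, which is exactly where the investigation below ends up.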
Unless I'm mistaken, Eric needs to stand up one cluster with `oc cluster up` and one with the installer, find the difference between them, and explain exactly what needs to change.

The ansible installer configures the service catalog APIService with a caBundle, while `oc cluster up` configures it with insecureSkipTLSVerify: true. No other differences leapt out at me.

Changing the APIService config to insecureSkipTLSVerify: true resolved the 500. It looks like the upgrade path with TLS verification is not handled correctly; the server is returning this error:

    error dialing backend: x509: cannot validate certificate for 172.30.1.2 because it doesn't contain any IP SANs

To recreate, ensure the APIService configured for the service catalog contains a caBundle rather than insecureSkipTLSVerify: true.

kube issue: https://github.com/kubernetes/kubernetes/issues/49354
kube fix: https://github.com/kubernetes/kubernetes/pull/49353
origin 3.6 fix: https://github.com/openshift/origin/pull/15388
origin 3.7 fix: https://github.com/openshift/origin/pull/15390

*** Bug 1474520 has been marked as a duplicate of this bug. ***

Fixed in v3.6.170-1.

Checked with:

    # openshift version
    openshift v3.6.170
    kubernetes v1.6.1+5115d708d7
    etcd 3.2.1

and the issue can no longer be reproduced.

This was fixed in 3.6.0.
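To make the reproduction condition concrete: the difference lives in the APIService registration object for the service catalog. Below is a minimal sketch of the two variants, assuming the object name `v1alpha1.servicecatalog.k8s.io`, the `apiserver` service in the `kube-service-catalog` namespace seen in the logs above, and `apiregistration.k8s.io/v1beta1` field names; check `oc get apiservice -o yaml` on a real cluster for the exact values. With `caBundle` set, the aggregator verifies the backend's serving certificate, and because it dialed the backend by service IP (172.30.1.2) and the certificate carried no IP SANs, every proxied request failed with the 500 above; with `insecureSkipTLSVerify: true`, verification is skipped and proxying succeeds, which is why `oc cluster up` environments never showed the bug.

```yaml
# Sketch of the service catalog APIService registration. The metadata name,
# service name/namespace, and priority values are illustrative assumptions;
# the contrast between the two TLS fields is the point.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.servicecatalog.k8s.io
spec:
  group: servicecatalog.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 10
  service:
    name: apiserver
    namespace: kube-service-catalog
  # Failing case (ansible install): the aggregator verifies the backend's
  # serving certificate against this CA and trips over the missing IP SANs.
  caBundle: <base64-encoded CA bundle>
  # Working case (oc cluster up): drop caBundle and set the field below,
  # and the aggregator skips TLS verification of the backend entirely.
  # insecureSkipTLSVerify: true
```

As I read the linked PRs, the upstream fix keeps the verified path working by validating the backend against its service hostname rather than the bare IP, so a caBundle no longer forces this failure.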