Bug 2023906
| Summary: | pod-identity-webhook permissions issue | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Kirk Bater <kbater> |
| Component: | Cloud Credential Operator | Assignee: | mworthin |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Jianping Shu <jshu> |
| Severity: | low | Docs Contact: | |
| Priority: | low | ||
| Version: | 4.8 | CC: | aweiteka, efried, jshu, lwan, sanchezl, vlaad |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2022-08-26 14:49:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Kirk Bater
2021-11-16 18:45:54 UTC
This is affecting clusters on both 4.8 and 4.9. I'm not sure if it goes back further as well.

Is something not actually functioning? Or is this just about the messages logged by the pod?

Unless a customer is using pod-identity-webhook in their applications (for example, to authenticate their workloads to AWS so they can interact with AWS services like RDS), it's a harmless message, because the cluster itself does not use it.

I don't know what the appropriate resolution is for this BZ. The messages are in fact harmless. But in BZ 2024613 there really is a race condition that needs to be addressed, and it just so happens that the fix for BZ 2024613 will make the pod messages in this BZ go away.

Tested with 4.10.0-0.nightly-2021-11-18-225109 (with PR 421) for Bug 2024613. Waited for hours and checked the pod-identity-webhook logs; no such error messages any more:

```
W1119 01:26:43.562759       1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1119 01:26:43.564378       1 main.go:191] Creating server
I1119 01:26:43.564578       1 main.go:211] Listening on :9999 for metrics and healthz
I1119 01:26:43.564645       1 main.go:205] Listening on :6443
2021/11/19 01:27:27 http: TLS handshake error from 10.128.0.1:33200: EOF
2021/11/19 01:34:17 http: TLS handshake error from 10.130.0.1:47978: EOF
2021/11/19 05:59:58 http: TLS handshake error from 10.130.0.1:44346: read tcp 10.130.0.27:6443->10.130.0.1:44346: read: connection reset by peer
2021/11/19 06:15:35 http: TLS handshake error from 10.130.0.1:35532: EOF
```

I think this can be closed now, yeah? What's the process for that? Does QE need to validate it in a particular GAed release or something?

Verified again. Cluster installed and waited for hours; no such error.
```
Client Version: 4.10.0-0.nightly-2021-12-01-072705
Server Version: 4.10.0-0.nightly-2022-02-26-230022
Kubernetes Version: v1.23.3+e419edf

$ oc logs pod-identity-webhook-7f667555db-62dt2 -n openshift-cloud-credential-operator > pod-identity-webhook-7f667555db-62dt2.log
$ cat pod-identity-webhook-7f667555db-62dt2.log
r was specified. Using the inClusterConfig. This might not work.
I0301 23:59:31.548333       1 main.go:191] Creating server
I0301 23:59:31.548521       1 main.go:211] Listening on :9999 for metrics and healthz
I0301 23:59:31.548670       1 main.go:205] Listening on :6443
```
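For anyone retracing the verification above, a minimal sketch of the log check is below. The `oc` invocation (commented out, since it needs cluster access) uses the namespace and deployment names from this BZ; the grep filter is a hypothetical triage heuristic, not part of the official fix: the TLS handshake EOFs are the expected noise from in-cluster health probes, so the filter drops them and counts only warning/error-level klog lines.

```shell
# To capture the real logs on a live cluster (names taken from this BZ):
#   oc logs deployment/pod-identity-webhook \
#     -n openshift-cloud-credential-operator > webhook.log

# Sample excerpt (copied from the comments above) so the filter can be demonstrated offline:
cat > webhook.log <<'EOF'
W1119 01:26:43.562759       1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1119 01:26:43.564645       1 main.go:205] Listening on :6443
2021/11/19 01:27:27 http: TLS handshake error from 10.128.0.1:33200: EOF
EOF

# Drop the expected TLS-handshake probe noise, then count klog lines at
# warning (W) or error (E) level; only those would need a closer look.
grep -v 'TLS handshake error' webhook.log | grep -c '^[EW]'
# prints 1  (the harmless --kubeconfig warning is the only W/E line left)
```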