Bug 2029570
| Summary: | Azure Stack Hub: CSI Driver does not use user-ca-bundle | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Patrick Dillon <padillon> |
| Component: | Storage | Assignee: | Fabio Bertinatto <fbertina> |
| Storage sub component: | Storage | QA Contact: | Wei Duan <wduan> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | urgent | | |
| Priority: | urgent | CC: | aos-bugs, jsafrane |
| Version: | 4.10 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.10.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2029571 (view as bug list) | Environment: | |
| Last Closed: | 2022-03-10 16:32:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2029571 | | |
Description
Patrick Dillon
2021-12-06 19:29:01 UTC
Please reach out to Casey Carson to get access to our WWT environment. I am setting the priority/severity to urgent because this blocks installs when a user requires an internal CA. My original comment mentions a workaround, but to be clear, this BZ is about enabling that workaround: the workaround is currently broken.

Tested with 4.10.0-0.nightly-2022-01-11-065245, and the CSI driver appears to start successfully. The cluster has some issues with other operators, but I believe this one is fixed.

```
[m@fedora ASH-IPI]$ oc get pods -n openshift-cluster-csi-drivers
NAME                                                READY   STATUS    RESTARTS   AGE
azure-disk-csi-driver-controller-6f7cbbcc84-hr57s   11/11   Running   0          28m
azure-disk-csi-driver-controller-6f7cbbcc84-k54kp   11/11   Running   0          28m
azure-disk-csi-driver-node-4n6jc                    3/3     Running   0          28m
azure-disk-csi-driver-node-kb9jx                    3/3     Running   0          28m
azure-disk-csi-driver-node-pkjxq                    3/3     Running   0          28m
azure-disk-csi-driver-operator-84546b8dc9-f7hld     1/1     Running   0          28m

[m@fedora ASH-IPI]$ oc logs azure-disk-csi-driver-node-4n6jc -c csi-driver -n openshift-cluster-csi-drivers
I0111 15:52:18.710511       1 main.go:112] set up prometheus server on [::]:29604
I0111 15:52:18.710852       1 azuredisk.go:142] DRIVER INFORMATION:
-------------------
Build Date: "2021-12-16T19:29:19Z"
Compiler: gc
Driver Name: disk.csi.azure.com
Driver Version: v1.9.0
Git Commit: c0142b0408f0f25e9d0ceffe1b2706a9e72d312c
Go Version: go1.17.2
Platform: linux/amd64
Topology Key: topology.disk.csi.azure.com/zone

Streaming logs below:
I0111 15:52:18.710871       1 azuredisk.go:145] driver userAgent: disk.csi.azure.com/v1.9.0 gc/go1.17.2 (amd64-linux)
I0111 15:52:18.711763       1 azure_disk_utils.go:129] reading cloud config from secret kube-system/azure-cloud-provider
W0111 15:52:18.719149       1 azure_disk_utils.go:136] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:openshift-cluster-csi-drivers:azure-disk-csi-driver-node-sa" cannot get resource "secrets" in API group "" in the namespace "kube-system"
I0111 15:52:18.719172       1 azure_disk_utils.go:141] could not read cloud config from secret kube-system/azure-cloud-provider
I0111 15:52:18.719178       1 azure_disk_utils.go:144] AZURE_CREDENTIAL_FILE env var set as /etc/kubernetes/cloud.conf
I0111 15:52:18.719194       1 azure_disk_utils.go:159] read cloud config from file: /etc/kubernetes/cloud.conf successfully
I0111 15:52:18.752343       1 azure_auth.go:119] azure: using client_id+client_secret to retrieve access token
I0111 15:52:18.752466       1 azure.go:692] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=6, jitter=1.000000
I0111 15:52:18.752612       1 azure_diskclient.go:67] Azure DisksClient using API version: 2019-03-01
I0111 15:52:18.752660       1 azure.go:909] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10
I0111 15:52:18.890176       1 mount_linux.go:202] Cannot run systemd-run, assuming non-systemd OS
I0111 15:52:18.890199       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0111 15:52:18.890207       1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0111 15:52:18.890210       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0111 15:52:18.890213       1 driver.go:81] Enabling controller service capability: LIST_SNAPSHOTS
I0111 15:52:18.890216       1 driver.go:81] Enabling controller service capability: CLONE_VOLUME
I0111 15:52:18.890219       1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME
I0111 15:52:18.890222       1 driver.go:81] Enabling controller service capability: LIST_VOLUMES
I0111 15:52:18.890224       1 driver.go:81] Enabling controller service capability: LIST_VOLUMES_PUBLISHED_NODES
I0111 15:52:18.890227       1 driver.go:81] Enabling controller service capability: SINGLE_NODE_MULTI_WRITER
I0111 15:52:18.890231       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_WRITER
I0111 15:52:18.890234       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0111 15:52:18.890237       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_SINGLE_WRITER
I0111 15:52:18.890240       1 driver.go:100] Enabling volume access mode: SINGLE_NODE_MULTI_WRITER
I0111 15:52:18.890243       1 driver.go:91] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0111 15:52:18.890246       1 driver.go:91] Enabling node service capability: EXPAND_VOLUME
I0111 15:52:18.890249       1 driver.go:91] Enabling node service capability: GET_VOLUME_STATS
I0111 15:52:18.890252       1 driver.go:91] Enabling node service capability: SINGLE_NODE_MULTI_WRITER
I0111 15:52:18.890553       1 server.go:117] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0111 15:52:20.811653       1 utils.go:95] GRPC call: /csi.v1.Identity/GetPluginInfo
I0111 15:52:20.811678       1 utils.go:96] GRPC request: {}
I0111 15:52:20.812855       1 utils.go:102] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"v1.9.0"}
I0111 15:52:21.786724       1 utils.go:95] GRPC call: /csi.v1.Node/NodeGetInfo
I0111 15:52:21.786747       1 utils.go:96] GRPC request: {}
I0111 15:52:22.359338       1 utils.go:102] GRPC response: {"accessible_topology":{"segments":{"topology.disk.csi.azure.com/zone":""}},"max_volumes_per_node":32,"node_id":"ipi410gahagan-8smcg-master-1"}
I0111 15:52:24.218730       1 utils.go:95] GRPC call: /csi.v1.Identity/GetPluginInfo
I0111 15:52:24.218754       1 utils.go:96] GRPC request: {}
I0111 15:52:24.218810       1 utils.go:102] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"v1.9.0"}

[m@fedora ASH-IPI]$ ./openshift-install version
./openshift-install 4.10.0-0.nightly-2022-01-11-065245
built from commit 28cfc831cee01eb503a2340b4d5365fd281bf867
release image registry.ci.openshift.org/ocp/release@sha256:d9759e7c8ec5e2555419d84ff36aff2a4c8f9367236c18e722a3fe4d7c4f6dee
release architecture amd64
```

Many thanks for the verification.
@Mike Gahagan

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056
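For context, the `user-ca-bundle` named in the summary is the additional trust bundle that an OpenShift install with a custom/internal CA places in the `openshift-config` namespace, and that the cluster-wide `Proxy` resource references via `spec.trustedCA`. A minimal sketch of that wiring, assuming the conventional resource names (illustrative only, not taken from this bug's attachments):

```yaml
# Sketch (assumed, not from this bug report): the user-supplied CA bundle is a
# ConfigMap in openshift-config, and proxy/cluster points at it. Operators such
# as the azure-disk-csi-driver-operator are expected to propagate this bundle
# into the trust store their operands use when calling the internally signed
# Azure Stack Hub API endpoint; this bug is about that propagation not happening.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle   # ConfigMap in openshift-config containing ca-bundle.crt
```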