When an installation is configured to use a proxy, some Pods that require access to the OpenStack API, such as openstack-cinder-csi-driver-operator or cluster-network-operator (when using Kuryr), attempt to connect to the API directly without going through the proxy, causing the installation to fail.

$ oc logs -f openstack-cinder-csi-driver-operator-5f55fbf947-22xj9 -n openshift-cluster-csi-drivers
W0723 14:16:57.930393 1 builder.go:99] graceful termination failed, controllers failed with error: couldn't collect info about cloud availability zones: failed to create a compute client: Get "https://38.x.x.91:13000/": dial tcp 38.x.x.91:13000: connect: no route to host

install-config.yaml used:

apiVersion: v1
baseDomain: ci.vexxhost.cz
compute:
- name: worker
  platform:
    openstack:
      type: m1.xlarge
      additionalSecurityGroupIDs: ['b97c865e-95fa-4a92-8930-241425d33fd4']
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.xlarge
      additionalSecurityGroupIDs: ['b97c865e-95fa-4a92-8930-241425d33fd4']
  replicas: 3
metadata:
  name: ocp-central
networking:
  machineNetwork:
  - cidr: 172.16.0.0/24
platform:
  openstack:
    cloud: openshift
    machinesSubnet: 6bb82a4f-de17-4872-8898-94cafa8ac81d
    apiVIP: 172.16.0.5
    ingressVIP: 172.16.0.7
    defaultMachinePlatform:
      type: m1.xlarge
proxy:
  httpProxy: http://dummy:dummy@172.16.0.61:3128/
  httpsProxy: https://dummy:dummy@172.16.0.61:3130/
pullSecret: |
sshKey: |
additionalTrustBundle: <cloud-ca> <ca-configured-on-squid>

$ openstack server list
| d5e24ad5-d8e5-436a-8ac2-8f52651e7c9f | ocp-central-hnprb-master-2 | ACTIVE | proxy=172.16.0.126                                           | ocp-central-hnprb-rhcos | m1.xlarge |
| d1945d8d-1cf8-4afd-8ff5-c427869467bc | ocp-central-hnprb-master-1 | ACTIVE | proxy=172.16.0.146                                           | ocp-central-hnprb-rhcos | m1.xlarge |
| 02f16d66-b589-4e9e-ae97-3b6cf980cfac | ocp-central-hnprb-master-0 | ACTIVE | proxy=172.16.0.201                                           | ocp-central-hnprb-rhcos | m1.xlarge |
| 7589d476-2d1c-4ad1-bdcd-1ff1db1e9282 | bastion-proxy              | ACTIVE | installer-network=10.196.2.27, 38.x.x.131; proxy=172.16.0.61 | centos8-stream          | m1.medium |

$ openstack router show bastion
| admin_state_up          | UP                                                                                                                                                                                    |
| availability_zone_hints |                                                                                                                                                                                       |
| availability_zones      | central                                                                                                                                                                               |
| created_at              | 2021-07-21T19:38:00Z                                                                                                                                                                  |
| description             |                                                                                                                                                                                       |
| external_gateway_info   | {"network_id": "7ca1777f-24ab-41cf-add1-e4c1d8b81725", "external_fixed_ips": [{"subnet_id": "29065cbb-a0f3-480c-998e-c5bbb3854656", "ip_address": "38.x.x.218"}], "enable_snat": true} |
| flavor_id               | None                                                                                                                                                                                  |
| id                      | 52299708-5de4-4681-bda8-c60c89520632                                                                                                                                                  |
| interfaces_info         | [{"port_id": "c169e0b2-84ca-4d79-b805-bb3dbbb36bc8", "ip_address": "10.196.0.1", "subnet_id": "112bc049-b03e-4dae-a8bf-ced6f9674ebd"}]                                                |

Squid configuration on the bastion VM:

[centos@bastion-proxy ~]$ sudo cat /etc/squid/squid.conf
acl localnet src 0.0.0.0/0
acl SSL_ports port 443
acl SSL_ports port 53
acl SSL_ports port 1025-65535
acl Safe_ports port 80
acl Safe_ports port 53
acl Safe_ports port 443
acl Safe_ports port 1025-65535
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_port 3128
https_port 3130 cert=/etc/squid/certs/domain.crt key=/etc/squid/certs/domain.key cafile=/etc/squid/certs/domain.crt
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid Basic Authentication
auth_param basic credentialsttl 2 hours
acl auth_users proxy_auth REQUIRED
http_access allow auth_users

Version:

$ openshift-install version
4.8.0-0.nightly-2021-07-19-192457, with IPI
I have created a Bugzilla for Kuryr with regard to cluster-network-operator: https://bugzilla.redhat.com/show_bug.cgi?id=1985486
Adding the test_blocker flag, as profile '47_IPI on OSP16 & FloatingIPLess & Disconnected & http_proxy' on OCP QE CI is hitting this issue:

[cloud-user@preserve-ocp4-shared-network-dis-bastion1 ~]$ ./oc get clusteroperators
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      53m
baremetal                                  4.9.0-0.nightly-2021-08-22-070405   True        False         False      73m
cloud-controller-manager                   4.9.0-0.nightly-2021-08-22-070405   True        False         False      76m
cloud-credential                           4.9.0-0.nightly-2021-08-22-070405   True        False         False      76m
cluster-autoscaler                         4.9.0-0.nightly-2021-08-22-070405   True        False         False      73m
config-operator                            4.9.0-0.nightly-2021-08-22-070405   True        False         False      75m
console                                    4.9.0-0.nightly-2021-08-22-070405   True        False         False      57m
csi-snapshot-controller                    4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m
dns                                        4.9.0-0.nightly-2021-08-22-070405   True        False         False      60m
etcd                                       4.9.0-0.nightly-2021-08-22-070405   True        False         False      73m
image-registry                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      61m
ingress                                    4.9.0-0.nightly-2021-08-22-070405   True        False         False      60m
insights                                   4.9.0-0.nightly-2021-08-22-070405   True        False         False      68m
kube-apiserver                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      69m
kube-controller-manager                    4.9.0-0.nightly-2021-08-22-070405   True        False         False      69m
kube-scheduler                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      71m
kube-storage-version-migrator              4.9.0-0.nightly-2021-08-22-070405   True        False         False      7m47s
machine-api                                4.9.0-0.nightly-2021-08-22-070405   True        False         False      70m
machine-approver                           4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m
machine-config                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      60m
marketplace                                4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m
monitoring                                 4.9.0-0.nightly-2021-08-22-070405   True        False         False      59m
network                                    4.9.0-0.nightly-2021-08-22-070405   True        False         False      75m
node-tuning                                4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m
openshift-apiserver                        4.9.0-0.nightly-2021-08-22-070405   True        False         False      60m
openshift-controller-manager               4.9.0-0.nightly-2021-08-22-070405   True        False         False      66m
openshift-samples                          4.9.0-0.nightly-2021-08-22-070405   True        False         False      65m
operator-lifecycle-manager                 4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m
operator-lifecycle-manager-catalog         4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m
operator-lifecycle-manager-packageserver   4.9.0-0.nightly-2021-08-22-070405   True        False         False      7m47s
service-ca                                 4.9.0-0.nightly-2021-08-22-070405   True        False         False      75m
storage                                    4.9.0-0.nightly-2021-08-22-070405   False       True          False      75m     OpenStackCinderCSIDriverOperatorCRAvailable: Waiting for OpenStackCinder operator to report status

[cloud-user@preserve-ocp4-shared-network-dis-bastion1 ~]$ ./oc logs -n openshift-cluster-csi-drivers openstack-cinder-csi-driver-operator-5c455b8ff-95hxv | grep builder
I0823 16:10:09.498938 1 builder.go:252] openstack-cinder-csi-driver-operator version 4.9.0-202108201456.p0.git.352770b.assembly.stream-352770b-352770be61a3c53d58dd6a65ed4fc366afd442dc
W0823 16:12:51.646410 1 builder.go:101] graceful termination failed, controllers failed with error: couldn't collect info about cloud availability zones: failed to create a compute client: Post "https://rhos-d.infra.prod.upshift.rdu2.redhat.com:13000/v3/auth/tokens": dial tcp 192.168.0.8:13000: connect: connection refused
Verified on 4.9.0-0.nightly-2021-08-23-224104 on PSI (OpenStack OSP 16.1). Installation with profile '47_IPI on OSP16 & FloatingIPLess & Disconnected & http_proxy' on OCP QE CI runs successfully.

08-24 09:50:18.389 level=debug msg=Time elapsed per stage:
08-24 09:50:18.389 level=debug msg=                   : 6m7s
08-24 09:50:18.389 level=debug msg= Bootstrap Complete: 9m3s
08-24 09:50:18.389 level=debug msg=  Bootstrap Destroy: 36s
08-24 09:50:18.389 level=debug msg=  Cluster Operators: 20m46s
08-24 09:50:18.389 level=info msg=Time elapsed: 38m27s

$ ./oc get clusteroperators
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.9.0-0.nightly-2021-08-23-224104   True        False         False      39m
baremetal                                  4.9.0-0.nightly-2021-08-23-224104   True        False         False      62m
cloud-controller-manager                   4.9.0-0.nightly-2021-08-23-224104   True        False         False      66m
cloud-credential                           4.9.0-0.nightly-2021-08-23-224104   True        False         False      67m
cluster-autoscaler                         4.9.0-0.nightly-2021-08-23-224104   True        False         False      62m
config-operator                            4.9.0-0.nightly-2021-08-23-224104   True        False         False      65m
console                                    4.9.0-0.nightly-2021-08-23-224104   True        False         False      44m
csi-snapshot-controller                    4.9.0-0.nightly-2021-08-23-224104   True        False         False      64m
dns                                        4.9.0-0.nightly-2021-08-23-224104   True        False         False      62m
etcd                                       4.9.0-0.nightly-2021-08-23-224104   True        False         False      63m
image-registry                             4.9.0-0.nightly-2021-08-23-224104   True        False         False      48m
ingress                                    4.9.0-0.nightly-2021-08-23-224104   True        False         False      46m
insights                                   4.9.0-0.nightly-2021-08-23-224104   True        False         False      52m
kube-apiserver                             4.9.0-0.nightly-2021-08-23-224104   True        False         False      61m
kube-controller-manager                    4.9.0-0.nightly-2021-08-23-224104   True        False         False      61m
kube-scheduler                             4.9.0-0.nightly-2021-08-23-224104   True        False         False      63m
kube-storage-version-migrator              4.9.0-0.nightly-2021-08-23-224104   True        False         False      65m
machine-api                                4.9.0-0.nightly-2021-08-23-224104   True        False         False      57m
machine-approver                           4.9.0-0.nightly-2021-08-23-224104   True        False         False      63m
machine-config                             4.9.0-0.nightly-2021-08-23-224104   True        False         False      63m
marketplace                                4.9.0-0.nightly-2021-08-23-224104   True        False         False      62m
monitoring                                 4.9.0-0.nightly-2021-08-23-224104   True        False         False      44m
network                                    4.9.0-0.nightly-2021-08-23-224104   True        False         False      64m
node-tuning                                4.9.0-0.nightly-2021-08-23-224104   True        False         False      62m
openshift-apiserver                        4.9.0-0.nightly-2021-08-23-224104   True        False         False      59m
openshift-controller-manager               4.9.0-0.nightly-2021-08-23-224104   True        False         False      56m
openshift-samples                          4.9.0-0.nightly-2021-08-23-224104   True        False         False      55m
operator-lifecycle-manager                 4.9.0-0.nightly-2021-08-23-224104   True        False         False      63m
operator-lifecycle-manager-catalog         4.9.0-0.nightly-2021-08-23-224104   True        False         False      63m
operator-lifecycle-manager-packageserver   4.9.0-0.nightly-2021-08-23-224104   True        False         False      59m
service-ca                                 4.9.0-0.nightly-2021-08-23-224104   True        False         False      65m
storage                                    4.9.0-0.nightly-2021-08-23-224104   True        False         False      59m

$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-08-23-224104   True        False         39m     Cluster version is 4.9.0-0.nightly-2021-08-23-224104

The proxy environment variables are injected into the operator pod:

$ ./oc get pods -n openshift-cluster-csi-drivers openstack-cinder-csi-driver-operator-65b87cfdb7-lrhlb -o yaml | grep ^spec: -A30 | grep HTTP -A2
    - name: HTTPS_PROXY
      value: http://192.168.0.8:8888
    - name: HTTP_PROXY
      value: http://192.168.0.8:8888

PVCs on in-tree Cinder, out-of-tree Cinder, and Manila were created successfully on the setup:

$ ./oc get pvc -A
NAMESPACE   NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
test        pvc1   Bound    pvc-de5b93d5-0c76-498a-a9da-5a59b1d24ff4   1Gi        RWO            sc-test-outtree   6m28s
test        pvc2   Bound    pvc-8f95b883-a34e-42fe-b236-7cfd8a858998   1Gi        RWO            sc-test-intree    10m
test        pvc3   Bound    pvc-493fffb3-8a2e-4c3f-acbb-b86e1ff12e7c   1Gi        RWO            csi-manila-ceph   3m45s

The previous 4.9 nightly without the fix, which was hitting this issue, was 4.9.0-0.nightly-2021-08-22-070405.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759