Bumping severity because it blocks install behind a proxy.
Verified on 4.8.0-0.nightly-2021-09-06-042819 on top of OSP 16.1 (RHOS-16.1-RHEL-8-20210604.n.0) using the OpenShiftSDN network type. The installation was performed using IPI on a restricted network, configuring a proxy for both http and https:

--- install-config.yaml section ---
platform:
  openstack:
    cloud: "shiftstack"
    externalNetwork: ""
    region: "regionOne"
    computeFlavor: "m4.xlarge"
    machinesSubnet: 5bd85e62-3487-4d41-977f-f508c1f40045
apiVIP: "172.16.0.5"
ingressVIP: "172.16.0.7"
proxy:
  httpProxy: http://dummy:dummy@172.16.0.3:3128/
  httpsProxy: https://dummy:dummy@172.16.0.3:3130/
----

The error mentioned in the bug description no longer appears in the logs:

$ oc logs machine-api-controllers-68b7c76784-l7zmk -n openshift-machine-api -c machine-controller | grep "Failed to authenticate provider client"
[cloud-user@installer-host ~]$

And the nodes were successfully deployed:

$ oc get nodes
NAME                          STATUS   ROLES    AGE   VERSION
ostest-mfhzv-master-0         Ready    master   70m   v1.21.1+9807387
ostest-mfhzv-master-1         Ready    master   70m   v1.21.1+9807387
ostest-mfhzv-master-2         Ready    master   70m   v1.21.1+9807387
ostest-mfhzv-worker-0-8rrrn   Ready    worker   48m   v1.21.1+9807387
ostest-mfhzv-worker-0-bgvnz   Ready    worker   47m   v1.21.1+9807387
ostest-mfhzv-worker-0-ncqj6   Ready    worker   48m   v1.21.1+9807387

$ oc -n openshift-machine-api get pods
NAME                                          READY   STATUS    RESTARTS   AGE
cluster-autoscaler-operator-8b565f5b4-cwzd8   2/2     Running   0          73m
cluster-baremetal-operator-747bc97d67-bd54c   2/2     Running   5          73m
machine-api-controllers-68b7c76784-l7zmk      7/7     Running   0          61m
machine-api-operator-5467b94745-rlpsl         2/2     Running   1          73m

The expected proxy variables are present in the machine-controller container:

$ oc -n openshift-machine-api rsh -c machine-controller machine-api-controllers-68b7c76784-l7zmk
sh-4.4$ env | grep -i proxy
HTTP_PROXY=http://dummy:dummy@172.16.0.3:3128/
NO_PROXY=.cluster.local,.svc,10.128.0.0/14,127.0.0.1,169.254.169.254,172.16.0.0/24,172.30.0.0/16,api-int.ostest.shiftstack.com,localhost
HTTPS_PROXY=https://dummy:dummy@172.16.0.3:3130/
sh-4.4$

Please note that the installation did not complete successfully
because the Storage clusteroperator is degraded due to https://bugzilla.redhat.com/show_bug.cgi?id=1996672:

$ oc logs -n openshift-cluster-csi-drivers openstack-cinder-csi-driver-operator-cdb55587b-pxfwk | tail -2
I0907 09:50:41.219043       1 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-cluster-csi-drivers", Name:"openstack-cinder-csi-driver-operator-lock", UID:"64cd106c-2953-42dc-a781-3774d0d13f2d", APIVersion:"v1", ResourceVersion:"45917", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openstack-cinder-csi-driver-operator-cdb55587b-pxfwk_51e75064-f774-41a2-ae2f-8e9248709ab9 became leader
W0907 09:50:44.319558       1 builder.go:99] graceful termination failed, controllers failed with error: couldn't collect info about cloud availability zones: failed to create a compute client: Get "https://10.46.44.10:13000/": dial tcp 10.46.44.10:13000: connect: no route to host
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.8.11 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3429