Description of problem:

Steps to Reproduce:
1. Install a cluster on AWS (and get the AWS EBS CSI driver as part of it).
2. Check the CSI driver pods: oc -n openshift-cluster-csi-drivers get pod -o wide
3. Create a pod on a master that uses a PVC.

Actual results:
2. CSI driver pods run only on worker nodes:
...
aws-ebs-csi-driver-node-4sjs2   3/3   Running   0   67m   10.0.141.12    ip-10-0-141-12.ec2.internal    <none>   <none>
aws-ebs-csi-driver-node-9zvn7   3/3   Running   0   67m   10.0.162.142   ip-10-0-162-142.ec2.internal   <none>   <none>
aws-ebs-csi-driver-node-jzzgs   3/3   Running   0   67m   10.0.194.182   ip-10-0-194-182.ec2.internal   <none>   <none>
...
3. Masters can't use a PVC provided by the CSI driver.

Expected results:
2. Even masters have an aws-ebs-csi-driver-node-* pod.
3. Masters can use a PVC provided by the CSI driver.
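For step 3, a minimal sketch of the kind of manifest that reproduces the problem. The object names, image, and mount path are illustrative only; gp2-csi is the CSI StorageClass seen in the verification below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: master-test-pvc          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-csi
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: master-test-pod          # hypothetical name
spec:
  # Pin the pod to a master node and tolerate the master taint.
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: master-test-pvc

Without the fix, such a pod is expected to get stuck waiting for its volume, since no aws-ebs-csi-driver-node pod registers the CSI driver on the master it is pinned to.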
Verified pass:

NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-08-12-071533   True        False         14m     Cluster version is 4.6.0-0.nightly-2020-08-12-071533

$ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-137-216.us-east-2.compute.internal   Ready    worker   24m   v1.19.0-rc.2+edbf229-dirty
ip-10-0-141-255.us-east-2.compute.internal   Ready    master   35m   v1.19.0-rc.2+edbf229-dirty
ip-10-0-167-117.us-east-2.compute.internal   Ready    worker   24m   v1.19.0-rc.2+edbf229-dirty
ip-10-0-173-59.us-east-2.compute.internal    Ready    master   35m   v1.19.0-rc.2+edbf229-dirty
ip-10-0-207-200.us-east-2.compute.internal   Ready    master   35m   v1.19.0-rc.2+edbf229-dirty
ip-10-0-220-55.us-east-2.compute.internal    Ready    worker   24m   v1.19.0-rc.2+edbf229-dirty

$ oc -n openshift-cluster-csi-drivers get pod -o wide
NAME                                             READY   STATUS    RESTARTS   AGE     IP             NODE                                         NOMINATED NODE   READINESS GATES
aws-ebs-csi-driver-controller-6c679787fd-mtgkf   5/5     Running   0          8m57s   10.0.167.117   ip-10-0-167-117.us-east-2.compute.internal   <none>           <none>
aws-ebs-csi-driver-node-2x8tr                    3/3     Running   0          21m     10.0.207.200   ip-10-0-207-200.us-east-2.compute.internal   <none>           <none>
aws-ebs-csi-driver-node-4htpr                    3/3     Running   0          21m     10.0.137.216   ip-10-0-137-216.us-east-2.compute.internal   <none>           <none>
aws-ebs-csi-driver-node-f7qvz                    3/3     Running   0          21m     10.0.220.55    ip-10-0-220-55.us-east-2.compute.internal    <none>           <none>
aws-ebs-csi-driver-node-gghwm                    3/3     Running   0          21m     10.0.141.255   ip-10-0-141-255.us-east-2.compute.internal   <none>           <none>
aws-ebs-csi-driver-node-mghm4                    3/3     Running   0          21m     10.0.167.117   ip-10-0-167-117.us-east-2.compute.internal   <none>           <none>
aws-ebs-csi-driver-node-wrkhs                    3/3     Running   0          21m     10.0.173.59    ip-10-0-173-59.us-east-2.compute.internal    <none>           <none>
aws-ebs-csi-driver-operator-898f5bd6-jk4b9       1/1     Running   0          11m     10.131.0.6     ip-10-0-137-216.us-east-2.compute.internal   <none>           <none>
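The node pods now land on the masters because the driver's node DaemonSet tolerates the master taint. A quick way to confirm this (a sketch; the DaemonSet name aws-ebs-csi-driver-node is inferred from the pod names above):

$ oc -n openshift-cluster-csi-drivers get daemonset aws-ebs-csi-driver-node \
    -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'

The output is expected to include a toleration covering the node-role.kubernetes.io/master:NoSchedule taint (for example a blanket "operator: Exists" toleration).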
Make sure the pod can run on a master:

$ oc get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE                                         NOMINATED NODE   READINESS GATES
mypod11   1/1     Running   0          48s   10.128.0.8   ip-10-0-141-255.us-east-2.compute.internal   <none>           <none>

$ oc get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc11   Bound    pvc-7fa54456-beb6-4612-8858-ac7ab4cd16a8   4Gi        RWO            gp2-csi        60s
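An optional further check that the volume is actually attached and writable from the master node; the /data mount path follows the sketch above and is an assumption about how mypod11 was created:

$ oc describe pvc mypvc11    # events should show successful provisioning by the EBS CSI driver
$ oc exec mypod11 -- sh -c 'echo ok > /data/testfile && cat /data/testfile'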
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196