Applying the known workaround to enable creating a PV when the Nova zone name differs from the Cinder zone name (https://github.com/openshift/installer/blob/master/docs/user/openstack/known-issues.md#cinder-availability-zones), the PVC is still stuck. It is using the standard storageClass, which is deployed during installation and set as the default one.

$ oc describe pvc/pvc2
Name:          pvc2
Namespace:     test
StorageClass:  standard
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: ostest-2sfsp-worker-0-z9fxb
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       app
Events:
  Type     Reason               Age                    From                         Message
  ----     ------               ----                   ----                         -------
  Normal   WaitForPodScheduled  7m50s (x363 over 13h)  persistentvolume-controller  waiting for pod app to be scheduled
  Warning  ProvisioningFailed   3m20s (x344 over 13h)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": failed to create a 1 GB volume: Bad request with: [POST https://10.46.22.71:13776/v3/93bf0e6974124c8a89ea715bd50d785a/volumes], error message: {"badRequest": {"code": 400, "message": "Availability zone 'AZ0' is invalid."}}

Object definitions:

$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard

$ cat pod_with_pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - mountPath: /var/lib/www/data
      name: mydata
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: pvc1
      readOnly: false

$ oc get storageclass standard -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
[...]
  name: standard
provisioner: kubernetes.io/cinder
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

To work around the above issue, a new storageClass must be created with the availability parameter set to nova, as below:

$ more sc-test.yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-test
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
reclaimPolicy: Delete
volumeBindingMode: Immediate

In my opinion, either the documentation should be updated to include this, the SCO should not require the availability parameter definition, or the standard storageClass should include this parameter by default.

Version:
$ openshift-install version
openshift-install 4.7.0-0.nightly-2020-12-14-165231
built from commit ec982ecf5bd847add0f3af82b04f32251972701c
release image registry.svc.ci.openshift.org/ocp/release@sha256:c1efa679e0fc39b3a9c6f3d3bce8e42b380420772a4c132aa696e1db6fc506e0

Platform:
Openstack RHOS-16.1-RHEL-8-20201124.n.0. IPI installation with Kuryr enabled on hybrid setup.
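For completeness, a claim that consumes the workaround class would only need to point its storageClassName at sc-test; a minimal sketch (the claim name pvc-test is illustrative, everything else mirrors the pvc.yaml above):

```yaml
# Hypothetical PVC bound against the workaround StorageClass (sc-test).
# Cinder then provisions the volume in the 'nova' availability zone
# instead of the invalid Nova zone name (e.g. 'AZ0').
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: sc-test
```

With volumeBindingMode: Immediate on sc-test, provisioning happens as soon as the claim is created, rather than waiting for a consuming pod to be scheduled.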
Verified pass.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633