StorageProfiles ease the burden of selecting the optimal PVC parameters (AccessMode and VolumeMode) for known CSI provisioners. As we encounter new provisioners, we should determine the correct values and update the internal table in CDI so that customers get a better out-of-the-box experience. A correct storage profile can also make VM provisioning an order of magnitude faster (CSI clone instead of host-assisted copy). Several customer support cases have already arisen because an OCP-certified provisioner had an empty or incorrect admin-provided storage profile.

Version-Release number of selected component (if applicable):
OpenShift Virtualization 4.12.0

How reproducible:
100%

Steps to Reproduce:
1. Create a DV with a storage class whose provisioner has no StorageProfile defaults (e.g. the HPE provisioner csi.hpe.com)

Actual results:
An empty StorageProfile is created

Expected results:
A StorageProfile with default AccessMode and VolumeMode should be created

Additional info:
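For reference, the defaults matter when a DataVolume is created through the storage API, which fills in accessModes and volumeMode from the StorageProfile. A minimal sketch of such a DV (the name and size are illustrative):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv               # illustrative name
spec:
  source:
    blank: {}
  storage:                       # storage API: accessModes/volumeMode come from the StorageProfile
    storageClassName: hpe-standard
    resources:
      requests:
        storage: 10Gi

With an empty StorageProfile, creating such a DV fails (or falls back to admin-provided values), which is what this bug is about.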
Tested on CNV-v4.12.4-37; the default AccessMode and VolumeMode are defined in the hpe StorageProfile:

$ oc get sc hpe-standard -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2023-06-02T09:05:39Z"
  name: hpe-standard
  resourceVersion: "71213"
  uid: 9a900efa-1824-4711-b770-fa98a6300a50
parameters:
  accessProtocol: iscsi
  allowOverrides: description,limitIops,performancePolicy
  csi.storage.k8s.io/controller-expand-secret-name: custom-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: custom-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/node-publish-secret-name: custom-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: custom-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: custom-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: Volume from HPE CSI Driver
  limitIops: "76800"
  performancePolicy: SQL Server
provisioner: csi.hpe.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc get storageprofile hpe-standard -o yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  creationTimestamp: "2023-06-02T09:05:39Z"
  generation: 1
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.12.4
    cdi.kubevirt.io: ""
  name: hpe-standard
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CDI
    name: cdi-kubevirt-hyperconverged
    uid: 28944510-d4e8-411d-8556-6c67d1ef2119
  resourceVersion: "71215"
  uid: fcce2152-efdb-4fa2-b940-bbcec6b8b685
spec: {}
status:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Block
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
  provisioner: csi.hpe.com
  storageClass: hpe-standard
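As additional context: for provisioners that CDI does not yet recognize, a cluster admin can populate an empty profile by setting spec.claimPropertySets on the StorageProfile, which CDI then mirrors into status and prefers over its internal table. A minimal sketch, assuming the same hpe-standard class (the accessModes/volumeMode values are illustrative and must match what the provisioner actually supports):

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: hpe-standard             # must match the StorageClass name
spec:
  claimPropertySets:             # admin-provided defaults; CDI copies these into status
  - accessModes:
    - ReadWriteOnce              # illustrative value
    volumeMode: Block

Because the StorageProfile object is created and owned by CDI, in practice this is applied as a patch rather than a fresh manifest, e.g.:

$ oc patch storageprofile hpe-standard --type merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Block"}]}}'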
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Virtualization 4.12.4 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:3889