Description of problem:
When a user adds a disk to a VM via the Disks tab, the added disk doesn't have volumeMode set to the value configured in the kubevirt-storage-class-defaults configMap.

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-09-19-040356
HCO_BUNDLE_REGISTRY_TAG=v2.1.0-56

How reproducible:
100%

Steps to Reproduce:
1. Create a VM.
2. Add to the kubevirt-storage-class-defaults configMap:
     rook-ceph-block.accessMode: ReadWriteMany
     rook-ceph-block.volumeMode: Block
   and leave the defaults as:
     accessMode: ReadWriteMany
     volumeMode: Filesystem
3. Add a disk to the created VM that uses the rook-ceph-block storage class.

Actual results:
The created PVC has volumeMode Filesystem instead of Block, as defined in the kubevirt-storage-class-defaults configMap.

Expected results:
The created PVC has volumeMode Block, per the per-storage-class default in the configMap.

Additional info:
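For reference, the configMap edit in step 2 can be expressed as a merge patch; a minimal sketch, assuming the configMap is deployed in the openshift-cnv namespace as in the verification output below:

```yaml
# Patch fragment for the kubevirt-storage-class-defaults configMap
# (could be applied with, e.g.:
#   oc patch cm kubevirt-storage-class-defaults -n openshift-cnv \
#     --type merge -p "$(cat patch.yaml)")
data:
  # cluster-wide defaults, left unchanged
  accessMode: ReadWriteMany
  volumeMode: Filesystem
  # per-storage-class overrides for the rook-ceph-block SC
  rook-ceph-block.accessMode: ReadWriteMany
  rook-ceph-block.volumeMode: Block
```

With this in place, disks added from the Disks tab against the rook-ceph-block SC are expected to pick up the `rook-ceph-block.*` keys rather than the cluster-wide defaults.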
https://github.com/kubevirt/web-ui-components/pull/555 and https://github.com/openshift/console/pull/2782
*** Bug 1755234 has been marked as a duplicate of this bug. ***
Backport: https://github.com/kubevirt/web-ui-components/pull/564
Verified on 4.3.0-0.nightly-2019-11-02-092336; the mode is Block ("volumeMode: Block").

$ oc get cm -n openshift-cnv kubevirt-storage-class-defaults -o yaml
apiVersion: v1
data:
  accessMode: ReadWriteMany
  local-sc.accessMode: ReadWriteOnce
  local-sc.volumeMode: Filesystem
  rook-ceph-block.accessMode: ReadWriteMany
  rook-ceph-block.volumeMode: Block
  volumeMode: Filesystem
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-08T03:29:27Z"
  labels:
    app: hyperconverged-cluster
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
  ownerReferences:
  - apiVersion: hco.kubevirt.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: HyperConverged
    name: hyperconverged-cluster
    uid: 982553f1-dcf4-494b-bcd6-9abf4e22cf5a
  resourceVersion: "3668949"
  selfLink: /api/v1/namespaces/openshift-cnv/configmaps/kubevirt-storage-class-defaults
  uid: 188c1132-5cd0-4f9e-bd9c-a45e0be7bb92

$ oc get pvc test-disk1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.import.source: none
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2019-11-12T05:18:48Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
    cdi-controller: test-disk1
  name: test-disk1
  namespace: default
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: test-disk1
    uid: e0d58db7-9071-43b1-bb0f-305eea4ce7f2
  resourceVersion: "3669067"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test-disk1
  uid: 8209663d-848b-43ff-a0d1-36d52fb9d3ff
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Block
  volumeName: pvc-8209663d-848b-43ff-a0d1-36d52fb9d3ff
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  phase: Bound
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062