Bug 1753688

Summary: Adding disk via VM Disks tab always adds a disk with 'Filesystem' VolumeMode
Product: OpenShift Container Platform
Component: Console Kubevirt Plugin
Version: 4.2.0
Target Milestone: ---
Target Release: 4.3.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Radim Hrazdil <rhrazdil>
Assignee: Filip Krepinsky <fkrepins>
QA Contact: Nelly Credi <ncredi>
CC: aos-bugs, fkrepins, gouyang, mlibra, tjelinek
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Cause: No volumeMode was passed down to newly created disks.
Consequence: PVCs might not bind.
Fix: Pass the volumeMode to new disks.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-01-23 11:06:22 UTC
Type: Bug

Description Radim Hrazdil 2019-09-19 15:05:49 UTC
Description of problem:
When a user adds a disk to a VM via the Disks tab, the added disk doesn't have volumeMode set to the value configured in the kubevirt-storage-class-defaults configMap for the selected storage class.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. create a VM
2. Add the following entries to the kubevirt-storage-class-defaults configMap:
rook-ceph-block.accessMode: ReadWriteMany
rook-ceph-block.volumeMode: Block
and leave the defaults as:
accessMode: ReadWriteMany
volumeMode: Filesystem
3. Add a disk to the created VM that uses rook-ceph-block SC
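For reference, after step 2 the configMap used for reproduction would look like the following (values taken from comment 8; the metadata is abbreviated):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
data:
  # Global defaults, used when no per-storage-class key matches
  accessMode: ReadWriteMany
  volumeMode: Filesystem
  # Per-storage-class overrides for the rook-ceph-block storage class
  rook-ceph-block.accessMode: ReadWriteMany
  rook-ceph-block.volumeMode: Block
```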

Actual results:
The created PVC has volumeMode Filesystem instead of Block, which is the value defined for the rook-ceph-block storage class in the kubevirt-storage-class-defaults configMap.

Expected results:
The created PVC has volumeMode Block, as defined for the rook-ceph-block storage class in the kubevirt-storage-class-defaults configMap.
Additional info:

Comment 4 Andrew Burden 2019-09-27 10:43:30 UTC
*** Bug 1755234 has been marked as a duplicate of this bug. ***

Comment 5 Marek Libra 2019-10-18 07:41:42 UTC
Backport: https://github.com/kubevirt/web-ui-components/pull/564
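The fix passes the volumeMode from the kubevirt-storage-class-defaults ConfigMap down to newly created disks instead of always using the global Filesystem default. A minimal TypeScript sketch of that lookup, assuming a per-storage-class key wins over the global key (the helper name and types here are hypothetical, not the actual web-ui-components code):

```typescript
// Shape of the kubevirt-storage-class-defaults ConfigMap's `data` field.
type StorageDefaults = Record<string, string>;

// Hypothetical helper: resolve the default volumeMode for a new disk.
// A per-storage-class key such as "rook-ceph-block.volumeMode" takes
// precedence over the global "volumeMode" key; "Filesystem" is the
// last-resort fallback.
function getDefaultVolumeMode(
  defaults: StorageDefaults,
  storageClass?: string,
): string {
  const perClass = storageClass
    ? defaults[`${storageClass}.volumeMode`]
    : undefined;
  return perClass ?? defaults['volumeMode'] ?? 'Filesystem';
}
```

With the ConfigMap data from comment 8, a disk on rook-ceph-block resolves to Block while any other storage class falls back to the global Filesystem default; the bug was that the per-storage-class lookup never happened for disks added via the Disks tab.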

Comment 8 Guohua Ouyang 2019-11-12 05:20:54 UTC
Verified on 4.3.0-0.nightly-2019-11-02-092336; the created PVC has "volumeMode: Block".

$ oc get cm -n openshift-cnv kubevirt-storage-class-defaults -o yaml
apiVersion: v1
data:
  accessMode: ReadWriteMany
  local-sc.accessMode: ReadWriteOnce
  local-sc.volumeMode: Filesystem
  rook-ceph-block.accessMode: ReadWriteMany
  rook-ceph-block.volumeMode: Block
  volumeMode: Filesystem
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-08T03:29:27Z"
  labels:
    app: hyperconverged-cluster
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
  ownerReferences:
  - apiVersion: hco.kubevirt.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: HyperConverged
    name: hyperconverged-cluster
    uid: 982553f1-dcf4-494b-bcd6-9abf4e22cf5a
  resourceVersion: "3668949"
  selfLink: /api/v1/namespaces/openshift-cnv/configmaps/kubevirt-storage-class-defaults
  uid: 188c1132-5cd0-4f9e-bd9c-a45e0be7bb92

$ oc get pvc test-disk1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.import.source: none
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2019-11-12T05:18:48Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
    cdi-controller: test-disk1
  name: test-disk1
  namespace: default
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: test-disk1
    uid: e0d58db7-9071-43b1-bb0f-305eea4ce7f2
  resourceVersion: "3669067"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test-disk1
  uid: 8209663d-848b-43ff-a0d1-36d52fb9d3ff
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Block
  volumeName: pvc-8209663d-848b-43ff-a0d1-36d52fb9d3ff
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  phase: Bound

Comment 10 errata-xmlrpc 2020-01-23 11:06:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.