Bug 2211571

Summary: Image-upload hangs when using block-volume if the underlying storage had volumemode=block
Product: Container Native Virtualization (CNV)
Reporter: David Sedgmen <dsedgmen>
Component: Storage
Assignee: Adam Litke <alitke>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Natalie Gavrielov <ngavrilo>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.12.3
CC: yadu
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-06-21 12:49:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description David Sedgmen 2023-06-01 05:44:45 UTC
Description of problem:
Image-upload hangs when using --block-volume if the underlying storage has volumeMode: Block, because the scratch PVC is created with volumeMode: Filesystem.

Version-Release number of selected component (if applicable):

virtctl version
Client Version: version.Info{GitVersion:"v0.58.1-57-gfa16ad5c8", GitCommit:"fa16ad5c8189c14e3adf4e757dbfbc68216af85f", GitTreeState:"clean", BuildDate:"2023-05-11T05:38:34Z", GoVersion:"go1.18.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v0.58.1-57-gfa16ad5c8", GitCommit:"fa16ad5c8189c14e3adf4e757dbfbc68216af85f", GitTreeState:"clean", BuildDate:"2023-05-11T05:40:21Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}

kubevirt-virtctl-4.12.3-1137.el8.x86_64
OpenShift version: 4.12.11

How reproducible:

Every time

Steps to Reproduce:
1. Set up either iSCSI persistent storage or an NFS dynamic provisioner
2. Try to create an image DV with --block-volume

Actual results:
~~~
[kni@localhost iscsi]$ virtctl image-upload dv openstack-base-img4 -n openstack --size 50Gi --image-path=/var/lib/libvirt/images/rhel-8.4-x86_64-kvm.qcow2 --insecure --storage-class=iscsi-ssd2 --block-volume --access-mode ReadWriteOnce
PVC openstack/openstack-base-img4 not found 
DataVolume openstack/openstack-base-img4 created
Waiting for PVC openstack-base-img4 upload pod to be ready...
^Z
[1]+  Stopped                 virtctl image-upload dv openstack-base-img4 -n openstack --size 50Gi --image-path=/var/lib/libvirt/images/rhel-8.4-x86_64-kvm.qcow2 --insecure --storage-class=iscsi-ssd2 --block-volume --access-mode ReadWriteOnce
[kni@localhost iscsi]$ bg
[1]+ virtctl image-upload dv openstack-base-img4 -n openstack --size 50Gi --image-path=/var/lib/libvirt/images/rhel-8.4-x86_64-kvm.qcow2 --insecure --storage-class=iscsi-ssd2 --block-volume --access-mode ReadWriteOnce &
[kni@localhost iscsi]$ 
[kni@localhost iscsi]$ 
[kni@localhost iscsi]$ oc get pvc
NAME                          STATUS    VOLUME                                     CAPACITY      ACCESS MODES   STORAGECLASS       AGE
openstack-base-img            Bound     pvc-22c71b48-e6a9-47d0-8547-31e882791889   56811736720   RWX            nfs-client-share   5h22m
openstack-base-img2           Bound     iscsi-pv-04                                100Gi         RWO            iscsi-ssd          5h11m
openstack-base-img4           Bound     iscsi-pv-07                                100Gi         RWO            iscsi-ssd2         17s
openstack-base-img4-scratch   Pending                                                                           iscsi-ssd2         16s
openstackclient-cloud-admin   Bound     pvc-95ad1133-63ef-4ddc-8d90-0fbaf9f56a02   4G            RWO            nfs-client         26h
openstackclient-hosts         Bound     pvc-f49066ac-4824-46c1-81e4-ba181ec17ac3   1G            RWO            nfs-client         26h
openstackclient-kolla-src     Bound     pvc-894cf9ac-8b1b-4816-947f-cb2026e1b410   1G            RWO            nfs-client         26h

[kni@localhost iscsi]$ oc get pvc openstack-base-img4 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.condition.bound: "false"
    cdi.kubevirt.io/storage.condition.bound.message: Claim Pending
    cdi.kubevirt.io/storage.condition.bound.reason: Claim Pending
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.deleteAfterCompletion: "true"
    cdi.kubevirt.io/storage.pod.phase: Pending
    cdi.kubevirt.io/storage.pod.ready: "false"
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.upload.target: ""
    cdi.kubevirt.io/storage.uploadPodName: cdi-upload-openstack-base-img4
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2023-06-01T05:40:03Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.12.3
  name: openstack-base-img4
  namespace: openstack
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: openstack-base-img4
    uid: 8e88a64a-b67b-4838-b5db-298af0accf73
  resourceVersion: "41576012"
  uid: 9c80019d-6a31-439d-b16e-c789c14330a8
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "53687091200"
  storageClassName: iscsi-ssd2
  volumeMode: Block
  volumeName: iscsi-pv-07
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  phase: Bound
[kni@localhost iscsi]$ oc get pvc openstack-base-img4-scratch -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2023-06-01T05:40:04Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
  name: openstack-base-img4-scratch
  namespace: openstack
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: Pod
    name: cdi-upload-openstack-base-img4
    uid: 1b97b03d-76ec-41c6-a082-9d576d7b86b8
  resourceVersion: "41576007"
  uid: 024a4567-c798-40fd-a6de-a2a7174f2b68
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "53687091200"
  storageClassName: iscsi-ssd2
  volumeMode: Filesystem
status:
  phase: Pending

[kni@localhost iscsi]$ oc get pv iscsi-pv-08 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: "2023-05-31T23:14:05Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: iscsi-pv-08
  resourceVersion: "41260288"
  uid: 6f40a4cc-c08d-46fa-bf86-cfb7cdb1f9ac
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  iscsi:
    iqn: iqn.2003-01.org.linux-iscsi.dell-r640-018.x8664:sn.bc5015159bef
    iscsiInterface: default
    lun: 7
    targetPortal: 10.1.8.109:3260
  persistentVolumeReclaimPolicy: Retain
  storageClassName: iscsi-ssd2
  volumeMode: Block
status:
  phase: Available

~~~

Expected results:

Either the scratch PVC should use volumeMode: Block, or it should be possible to use a different storage class for the scratch space, or it should be documented that volumeMode: Block must be left undefined on the PVs.
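
For reference, a scratch PVC that could bind against the Block-mode PVs in this storage class would need a spec along these lines (a minimal sketch only; CDI generates this object itself with its own labels, annotations, and owner references, which are omitted here):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openstack-base-img4-scratch
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "53687091200"
  storageClassName: iscsi-ssd2
  # This is the field in question: CDI sets Filesystem here, so the claim
  # stays Pending because every PV in iscsi-ssd2 has volumeMode: Block.
  volumeMode: Block
```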

Additional info:

Comment 1 Yan Du 2023-06-14 12:24:42 UTC
Could you please attach the events from `oc describe pvc openstack-base-img4-scratch`?
Does it work when you create a PVC with the iscsi-ssd2 storage class and Filesystem volume mode?
You can change the scratch space storage class by editing the HCO CR to select a storage class that supports the Filesystem volume mode.
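
The suggested HCO CR edit can be sketched as follows. The CR name and namespace shown are the defaults of a typical CNV deployment and the target storage class (nfs-client, which appears in the PVC listing above) is only an example; verify the field name against your CNV version:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  # Point CDI scratch space at a storage class whose PVs
  # support the Filesystem volume mode.
  scratchSpaceStorageClass: nfs-client
```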

Comment 2 David Sedgmen 2023-06-14 22:26:29 UTC
Hi,

Sorry, the environment expired yesterday.

It works when I have a storage class that includes some PVs with Filesystem volume mode and some PVs with Block volume mode.

# Defaults to volumeMode: Filesystem when undefined 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-02
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: iscsi-ssd
  iscsi:
     targetPortal: 10.1.8.109:3260
     iqn: iqn.2003-01.org.linux-iscsi.dell-r640-018.x8664:sn.bc5015159bef
     lun: 1

# volumeMode: Block
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-03
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: iscsi-ssd
  volumeMode: Block
  iscsi:
     targetPortal: 10.1.8.109:3260
     iqn: iqn.2003-01.org.linux-iscsi.dell-r640-018.x8664:sn.bc5015159bef
     lun: 2

Comment 3 Yan Du 2023-06-21 12:49:54 UTC

*** This bug has been marked as a duplicate of bug 2211568 ***

Comment 5 Yan Du 2023-06-25 11:24:28 UTC
Hi, David
Although the behavior is different, we think it is caused by an incomplete storage profile, i.e. the same root cause.
I can move it to INSUFFICIENT_DATA if you disagree with the duplicate. It would be better to have the storage class YAML and storage profile YAML for future debugging; feel free to reopen once you have more information. Thanks