Bug 1877341 - Upload PVC with virtctl and specify any StorageClass results with default StorageClass for the pvc
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 2.4.2
Assignee: Adam Litke
QA Contact: dalia
URL:
Whiteboard:
Depends On: 1876559
Blocks:
Reported: 2020-09-09 12:23 UTC by Natalie Gavrielov
Modified: 2020-09-17 11:56 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1876559
Environment:
Last Closed: 2020-09-17 11:56:15 UTC
Target Upstream Version:
Embargoed:



Description Natalie Gavrielov 2020-09-09 12:23:30 UTC
+++ This bug was initially created as a clone of Bug #1876559 +++

Description of problem:
When using virtctl image-upload to create a DV/PVC, the PVC is created with the default StorageClass even if another storage class is specified on the command line.

Version-Release number of selected component (if applicable):
virtctl v0.30.6

How reproducible:
100%


Steps to Reproduce:

1. Download an image you want to use

2. Check default StorageClass
   $ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath-provisioner          kubevirt.io/hostpath-provisioner        Delete          WaitForFirstConsumer   false                  5d3h
local-block                   kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  5d3h
nfs                           kubernetes.io/no-provisioner            Delete          Immediate              false                  5d3h
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   5d3h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  5d3h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   5d3h
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  5d3h
standard (default)            kubernetes.io/cinder   
 
3. Upload the image with a specific StorageClass that is not the default, using virtctl:
   virtctl image-upload pvc hpp-image --size=18Gi --image-path=./Fedora-Cloud-Base-30-1.2.x86_64.qcow2 --access-mode=ReadWriteOnce --storage-class=hostpath-provisioner --insecure


Actual results:

The PVC and scratch PVC are created, but the storage class of the PVC is standard (the default SC in this case) instead of HPP:
 
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
hpp-image           Bound    pvc-50777267-7eb8-49cb-a8a0-fb40b3c9fd9c   18Gi       RWO            standard               43s
hpp-image-scratch   Bound    pvc-48bad3d8-a561-46de-938d-44c315c12a31   48Gi       RWO            hostpath-provisioner   42s


Expected results:
PVC hpp-image should be created with the HPP StorageClass (the scratch SC is configured as HPP in any case).
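For context on why the scratch PVC lands on HPP regardless of the upload command: the scratch space storage class is taken from the CDI configuration, not from the --storage-class flag. A minimal sketch of pinning it in the CDI custom resource (field name per upstream CDI; shown here as an illustration, not the exact config from this environment):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  config:
    # Storage class used for scratch space during upload/import;
    # falls back to the cluster default StorageClass when unset.
    scratchSpaceStorageClass: hostpath-provisioner
```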

Additional info:

$ oc logs cdi-upload-hpp-image 
I0907 12:02:35.769369       1 uploadserver.go:63] Upload destination: /data/disk.img
I0907 12:02:35.769465       1 uploadserver.go:65] Running server on 0.0.0.0:8443
I0907 12:02:42.474110       1 uploadserver.go:278] Content type header is ""
I0907 12:02:42.474171       1 data-processor.go:277] Calculating available size
I0907 12:02:42.474242       1 data-processor.go:289] Checking out file system volume size.
I0907 12:02:42.474299       1 data-processor.go:297] Request image size not empty.
I0907 12:02:42.474347       1 data-processor.go:302] Target size 18826846208.
I0907 12:02:42.474444       1 data-processor.go:206] New phase: TransferScratch
I0907 12:02:42.475193       1 util.go:161] Writing data...
I0907 12:02:48.215547       1 data-processor.go:206] New phase: Pause
I0907 12:02:48.215625       1 uploadserver.go:309] Returning success to caller, continue processing in background
I0907 12:02:48.215815       1 data-processor.go:151] Resuming processing at phase Process
I0907 12:02:48.215883       1 data-processor.go:206] New phase: Convert
I0907 12:02:48.215898       1 data-processor.go:212] Validating image


$ oc describe pvc hpp-image 
Name:          hpp-image
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-82171edc-c804-405b-bfc7-d42a17b4e3ba
Labels:        <none>
Annotations:   cdi.kubevirt.io/storage.condition.bound: true
               cdi.kubevirt.io/storage.condition.bound.message: 
               cdi.kubevirt.io/storage.condition.bound.reason: 
               cdi.kubevirt.io/storage.condition.running: false
               cdi.kubevirt.io/storage.condition.running.message: Upload Complete
               cdi.kubevirt.io/storage.condition.running.reason: Completed
               cdi.kubevirt.io/storage.pod.phase: Succeeded
               cdi.kubevirt.io/storage.pod.ready: false
               cdi.kubevirt.io/storage.upload.target: 
               cdi.kubevirt.io/storage.uploadPodName: cdi-upload-hpp-image
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: dafrank241-6958v-worker-grmzp
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      18Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age                  From                         Message
  ----    ------                 ----                 ----                         -------
  Normal  WaitForFirstConsumer   109s (x3 over 109s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  ProvisioningSucceeded  108s                 persistentvolume-controller  Successfully provisioned volume pvc-82171edc-c804-405b-bfc7-d42a17b4e3ba using kubernetes.io/cinder
  Normal  UploadSucceeded        50s                  upload-controller            Upload Successful

--- Additional comment from Alex Kalenyuk on 2020-09-07 14:38:17 UTC ---

Is it possible that in this merged PR the custom --storage-class passed by the virtctl user doesn't get processed in "func createPVCSpec", and that is why the fallback is always to the default SC?

Link:
https://github.com/kubevirt/kubevirt/pull/3585/files#diff-5193b6b6b04d7c6e897bb0c241ad8bfe
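To illustrate the hypothesis above: a self-contained sketch of the propagation createPVCSpec should perform. The types and function below are simplified stand-ins (the real virtctl code builds a k8s.io/api/core/v1 PersistentVolumeClaimSpec); the reported symptom is consistent with the flagged assignment being skipped, so StorageClassName stays nil and the cluster applies the default StorageClass.

```go
package main

import "fmt"

// PVCSpec is a simplified stand-in for corev1.PersistentVolumeClaimSpec.
// A nil StorageClassName means "let the cluster apply the default
// StorageClass" -- which is exactly the buggy behavior observed here.
type PVCSpec struct {
	StorageClassName *string
	AccessMode       string
	Size             string
}

// createPVCSpec sketches the expected behavior: propagate a user-supplied
// --storage-class into the spec, and only fall back to the cluster default
// (nil) when the flag was left empty.
func createPVCSpec(size, accessMode, storageClass string) *PVCSpec {
	spec := &PVCSpec{AccessMode: accessMode, Size: size}
	if storageClass != "" {
		// If this assignment is missing, every uploaded PVC lands on
		// the default StorageClass regardless of the flag.
		spec.StorageClassName = &storageClass
	}
	return spec
}

func main() {
	withSC := createPVCSpec("18Gi", "ReadWriteOnce", "hostpath-provisioner")
	fmt.Println(*withSC.StorageClassName) // hostpath-provisioner

	withDefault := createPVCSpec("18Gi", "ReadWriteOnce", "")
	fmt.Println(withDefault.StorageClassName == nil) // true
}
```

Running the sketch shows the requested class being honored, while an empty flag leaves the field nil for the default-class fallback.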

--- Additional comment from Natalie Gavrielov on 2020-09-09 12:20:32 UTC ---

Do you see the same behaviour for data volumes?

Comment 1 Alex Kalenyuk 2020-09-17 11:56:15 UTC
Closing, as this behavior was only introduced in kubevirt v0.33, which corresponds to 2.5 and not 2.4.x
(we were getting the latest virtctl on downstream environments due to an error).

