Bug 1876559 - Upload PVC with virtctl and specify any StorageClass results with default StorageClass for the pvc
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 2.5.0
Assignee: Alex Kalenyuk
QA Contact: dalia
URL:
Whiteboard:
Depends On:
Blocks: 1877341
 
Reported: 2020-09-07 13:18 UTC by dalia
Modified: 2020-11-17 13:24 UTC
CC: 7 users

Fixed In Version: kubevirt-2.5.0-74.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1877341
Environment:
Last Closed: 2020-11-17 13:24:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt kubevirt pull 4155 0 None closed Fix virtctl image-upload ignoring custom storage class argument 2020-11-16 02:20:21 UTC
Red Hat Product Errata RHEA-2020:5127 0 None None None 2020-11-17 13:24:41 UTC

Description dalia 2020-09-07 13:18:54 UTC
Description of problem:
When using virtctl image-upload to create a DV/PVC, the PVC is created with the default StorageClass even if another storage class is specified in the command.

Version-Release number of selected component (if applicable):
virtctl v0.30.6

How reproducible:
100%


Steps to Reproduce:

1. Download an image you want to use

2. Check default StorageClass
   $ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath-provisioner          kubevirt.io/hostpath-provisioner        Delete          WaitForFirstConsumer   false                  5d3h
local-block                   kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  5d3h
nfs                           kubernetes.io/no-provisioner            Delete          Immediate              false                  5d3h
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   5d3h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  5d3h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   5d3h
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  5d3h
standard (default)            kubernetes.io/cinder   
 
3. Upload the image with a specific StorageClass that is not the default - using virtctl:
   virtctl image-upload pvc hpp-image --size=18Gi --image-path=./Fedora-Cloud-Base-30-1.2.x86_64.qcow2 --access-mode=ReadWriteOnce --storage-class=hostpath-provisioner --insecure


Actual results:

The PVC and scratch PVC are created, but the storage class of the PVC is standard (the default SC in this case) instead of hostpath-provisioner:
 
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
hpp-image           Bound    pvc-50777267-7eb8-49cb-a8a0-fb40b3c9fd9c   18Gi       RWO            standard               43s
hpp-image-scratch   Bound    pvc-48bad3d8-a561-46de-938d-44c315c12a31   48Gi       RWO            hostpath-provisioner   42s


Expected results:
PVC hpp-image should use the hostpath-provisioner StorageClass (the scratch PVC's SC is configured as HPP in any case).

Additional info:

$ oc logs cdi-upload-hpp-image 
I0907 12:02:35.769369       1 uploadserver.go:63] Upload destination: /data/disk.img
I0907 12:02:35.769465       1 uploadserver.go:65] Running server on 0.0.0.0:8443
I0907 12:02:42.474110       1 uploadserver.go:278] Content type header is ""
I0907 12:02:42.474171       1 data-processor.go:277] Calculating available size
I0907 12:02:42.474242       1 data-processor.go:289] Checking out file system volume size.
I0907 12:02:42.474299       1 data-processor.go:297] Request image size not empty.
I0907 12:02:42.474347       1 data-processor.go:302] Target size 18826846208.
I0907 12:02:42.474444       1 data-processor.go:206] New phase: TransferScratch
I0907 12:02:42.475193       1 util.go:161] Writing data...
I0907 12:02:48.215547       1 data-processor.go:206] New phase: Pause
I0907 12:02:48.215625       1 uploadserver.go:309] Returning success to caller, continue processing in background
I0907 12:02:48.215815       1 data-processor.go:151] Resuming processing at phase Process
I0907 12:02:48.215883       1 data-processor.go:206] New phase: Convert
I0907 12:02:48.215898       1 data-processor.go:212] Validating image


$ oc describe pvc hpp-image 
Name:          hpp-image
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-82171edc-c804-405b-bfc7-d42a17b4e3ba
Labels:        <none>
Annotations:   cdi.kubevirt.io/storage.condition.bound: true
               cdi.kubevirt.io/storage.condition.bound.message: 
               cdi.kubevirt.io/storage.condition.bound.reason: 
               cdi.kubevirt.io/storage.condition.running: false
               cdi.kubevirt.io/storage.condition.running.message: Upload Complete
               cdi.kubevirt.io/storage.condition.running.reason: Completed
               cdi.kubevirt.io/storage.pod.phase: Succeeded
               cdi.kubevirt.io/storage.pod.ready: false
               cdi.kubevirt.io/storage.upload.target: 
               cdi.kubevirt.io/storage.uploadPodName: cdi-upload-hpp-image
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: dafrank241-6958v-worker-grmzp
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      18Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age                  From                         Message
  ----    ------                 ----                 ----                         -------
  Normal  WaitForFirstConsumer   109s (x3 over 109s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  ProvisioningSucceeded  108s                 persistentvolume-controller  Successfully provisioned volume pvc-82171edc-c804-405b-bfc7-d42a17b4e3ba using kubernetes.io/cinder
  Normal  UploadSucceeded        50s                  upload-controller            Upload Successful

Comment 1 Alex Kalenyuk 2020-09-07 14:38:17 UTC
Is it possible that in this merged PR, the custom --storage-class passed by the virtctl user
doesn't get processed in "func createPVCSpec", and that is why the fallback is always to the default SC?

Link:
https://github.com/kubevirt/kubevirt/pull/3585/files#diff-5193b6b6b04d7c6e897bb0c241ad8bfe
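If that hypothesis holds, the failure mode can be sketched as follows. This is a minimal, self-contained Go sketch with hypothetical trimmed-down types and function names, not the actual KubeVirt/Kubernetes API: a PVC spec whose StorageClassName is left nil falls back to the cluster default, so a createPVCSpec-like function that drops its storageClass argument would reproduce the observed behavior, while the fix propagates any non-empty value.

```go
package main

import "fmt"

// Hypothetical trimmed-down PVC spec: in Kubernetes, a nil
// spec.storageClassName means "use the cluster's default StorageClass".
type PVCSpec struct {
	StorageClassName *string // nil => default StorageClass
	AccessMode       string
	Size             string
}

// buggyCreatePVCSpec mirrors the suspected bug: the storageClass argument
// is accepted but never copied into the spec.
func buggyCreatePVCSpec(size, accessMode, storageClass string) *PVCSpec {
	return &PVCSpec{AccessMode: accessMode, Size: size} // storageClass dropped
}

// fixedCreatePVCSpec propagates a non-empty storage class into the spec.
func fixedCreatePVCSpec(size, accessMode, storageClass string) *PVCSpec {
	spec := &PVCSpec{AccessMode: accessMode, Size: size}
	if storageClass != "" {
		spec.StorageClassName = &storageClass
	}
	return spec
}

func main() {
	buggy := buggyCreatePVCSpec("18Gi", "ReadWriteOnce", "hostpath-provisioner")
	fixed := fixedCreatePVCSpec("18Gi", "ReadWriteOnce", "hostpath-provisioner")
	fmt.Println(buggy.StorageClassName == nil) // true: cluster default wins
	fmt.Println(*fixed.StorageClassName)       // hostpath-provisioner
}
```

This matches the symptom in the description: the main PVC lands on standard (the default), while the scratch PVC, whose class is configured elsewhere, correctly uses hostpath-provisioner.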

Comment 2 Natalie Gavrielov 2020-09-09 12:20:32 UTC
Do you see the same behaviour for data volumes?

Comment 3 dalia 2020-09-09 16:20:34 UTC
DV creation example:

1. Download an image
2. Check which class is defined as the default:
    $ oc get sc
NAME                             PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath-provisioner (default)   kubevirt.io/hostpath-provisioner        Delete          WaitForFirstConsumer   false                  7d7h
local-block                      kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  7d7h
nfs                              kubernetes.io/no-provisioner            Delete          Immediate              false                  7d7h
ocs-storagecluster-ceph-rbd      openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   7d6h
ocs-storagecluster-ceph-rgw      openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  7d6h
ocs-storagecluster-cephfs        openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   7d6h
openshift-storage.noobaa.io      openshift-storage.noobaa.io/obc         Delete          Immediate              false                  7d6h
standard                         kubernetes.io/cinder                    Delete          WaitForFirstConsumer   true                   7d8h

3. Try to create a DV with a storage class that is not the default:
    $ virtctl image-upload dv upload-dv --size=18Gi --image-path=./Fedora-Cloud-Base-30-1.2.x86_64.qcow2 --access-mode=ReadWriteOnce --storage-class=ocs-storagecluster-ceph-rbd --insecure

4. Check the PVC & DV that have been created:
    $ oc get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
upload-dv   Bound    pvc-4d08f861-4d87-46c8-82b1-d4a1595f198e   48Gi       RWO            hostpath-provisioner   98s
    $ oc get dv
NAME        PHASE       PROGRESS   RESTARTS   AGE
upload-dv   Succeeded   N/A        0          4m8s


5. The PVC was created with the default storage class - HPP instead of OCS as specified in the virtctl command.

======================================================

$ oc describe dv upload-dv 
Name:         upload-dv
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cdi.kubevirt.io/v1alpha1
Kind:         DataVolume
Metadata:
  Creation Timestamp:  2020-09-09T16:16:02Z
  Generation:          32
  Managed Fields:
    API Version:  cdi.kubevirt.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:pvc:
          .:
          f:accessModes:
          f:resources:
            .:
            f:requests:
              .:
              f:storage:
        f:source:
          .:
          f:upload:
      f:status:
    Manager:      virtctl
    Operation:    Update
    Time:         2020-09-09T16:16:02Z
    API Version:  cdi.kubevirt.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
        f:phase:
        f:progress:
        f:restartCount:
    Manager:         virt-cdi-controller
    Operation:       Update
    Time:            2020-09-09T16:16:58Z
  Resource Version:  7542092
  Self Link:         /apis/cdi.kubevirt.io/v1alpha1/namespaces/default/datavolumes/upload-dv
  UID:               60b183ad-898a-4e36-843f-b3be0baddd48
Spec:
  Pvc:
    Access Modes:
      ReadWriteOnce
    Resources:
      Requests:
        Storage:  18Gi
  Source:
    Upload:
Status:
  Conditions:
    Last Heart Beat Time:  2020-09-09T16:16:03Z
    Last Transition Time:  2020-09-09T16:16:03Z
    Message:               PVC upload-dv Bound
    Reason:                Bound
    Status:                True
    Type:                  Bound
    Last Heart Beat Time:  2020-09-09T16:16:58Z
    Last Transition Time:  2020-09-09T16:16:58Z
    Status:                True
    Type:                  Ready
    Last Heart Beat Time:  2020-09-09T16:16:58Z
    Last Transition Time:  2020-09-09T16:16:58Z
    Message:               Upload Complete
    Reason:                Completed
    Status:                False
    Type:                  Running
  Phase:                   Succeeded
  Progress:                N/A
  Restart Count:           0
Events:
  Type    Reason           Age    From                   Message
  ----    ------           ----   ----                   -------
  Normal  Pending          2m26s  datavolume-controller  PVC upload-dv Pending
  Normal  Bound            2m26s  datavolume-controller  PVC upload-dv Bound
  Normal  UploadReady      2m17s  datavolume-controller  Upload into upload-dv ready
  Normal  UploadSucceeded  91s    datavolume-controller  Successfully uploaded into upload-dv



$ oc logs cdi-upload-upload-dv 
I0909 16:16:07.446033       1 uploadserver.go:63] Upload destination: /data/disk.img
I0909 16:16:07.446151       1 uploadserver.go:65] Running server on 0.0.0.0:8443
I0909 16:16:13.154760       1 uploadserver.go:278] Content type header is ""
I0909 16:16:13.154803       1 data-processor.go:277] Calculating available size
I0909 16:16:13.154962       1 data-processor.go:289] Checking out file system volume size.
I0909 16:16:13.155009       1 data-processor.go:297] Request image size not empty.
I0909 16:16:13.155049       1 data-processor.go:302] Target size 18Gi.
I0909 16:16:13.155097       1 data-processor.go:206] New phase: TransferScratch
I0909 16:16:13.155425       1 util.go:161] Writing data...
I0909 16:16:19.292689       1 data-processor.go:206] New phase: Pause
I0909 16:16:19.292746       1 uploadserver.go:309] Returning success to caller, continue processing in background
I0909 16:16:19.292878       1 data-processor.go:151] Resuming processing at phase Process
I0909 16:16:19.292895       1 data-processor.go:206] New phase: Convert
I0909 16:16:19.292904       1 data-processor.go:212] Validating image

Comment 4 Adam Litke 2020-09-16 13:01:04 UTC
Alex, can you please backport the PR to release-0.30 and add that PR link to Bug 1877341?

Comment 8 Adam Litke 2020-10-05 19:55:03 UTC
Waiting on a new d/s build from kubevirt.

Comment 9 Alex Kalenyuk 2020-10-06 08:33:11 UTC
Verified on virtctl-2.5.0-74.el7
HCO:[v2.5.0-270]
 HCO image: registry.redhat.io/container-native-virtualization/hyperconverged-cluster-operator@sha256:1d2ee6515d5b669d59dce162b9ab9ac7e523bcd052c16ef0be2bcd2614e399b2
CSV creation time: 2020-10-02 10:28:06
KubeVirt v0.34.0-rc.0-6-gad89f92
CDI v1.23.5

Comment 12 errata-xmlrpc 2020-11-17 13:24:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 2.5.0 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:5127

