Bug 2034544 - disk.img file is resized up for HPP and NFS storage classes
Summary: disk.img file is resized up for HPP and NFS storage classes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.10.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.10.0
Assignee: Maya Rashish
QA Contact: Jenia Peimer
URL:
Whiteboard:
Duplicates: 2040300
Depends On:
Blocks:
 
Reported: 2021-12-21 09:37 UTC by Jenia Peimer
Modified: 2022-03-16 16:05 UTC
CC List: 6 users

Fixed In Version: CNV v4.10.0-598
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-16 16:05:38 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub kubevirt/kubevirt pull 6999 (Merged): Limit ourselves to expanding to min(spec.request, status.capacity) (2022-01-13 14:25:22 UTC)
- GitHub kubevirt/kubevirt pull 7073 (Merged): [release-0.49] Limit ourselves to expanding to min(spec.request, status.capacity) (2022-01-19 09:46:51 UTC)
- Red Hat Product Errata RHSA-2022:0947 (2022-03-16 16:05:55 UTC)

Description Jenia Peimer 2021-12-21 09:37:13 UTC
Description of problem:
The disk.img file should match the requested storage size, but for the HPP and NFS storage classes it is resized up far beyond the request, toward the capacity of the underlying volume.

Version-Release number of selected component (if applicable):
4.10

How reproducible:
Always

Steps to Reproduce:
1. Create a VM using the HPP storage class with a requested storage size of 100Mi (or a VM using the NFS storage class); see the manifests under Additional info
2. Check the size of disk.img on the backing volume

Actual results:
disk.img size is much larger than the requested size

Expected results:
disk.img size should be the same as, or slightly smaller than, the requested size


Additional info:

How to get disk.img size:

HPP disk.img size: 64.26Gi, but expected to be 100Mi

$ oc get vmi -A
NAME                            PHASE     IP             NODENAME                         READY
vm-cirros-datavolume-hpp        Running   ************   c01-jj410-kcw5p-worker-0-nxcxh   True

$ oc debug node/c01-jj410-kcw5p-worker-0-nxcxh
sh-4.4# chroot /host
sh-4.4# stat var/hpvolumes/pvc-cbb443e4-350f-456a-8bbd-9e164caa51cb/disk.img | grep Size
  Size: 68998649856	Blocks: 4337592    IO Block: 4096   regular file
sh-4.4# Ctrl+D Ctrl+D


$ oc get pods -A | grep virt-launcher
virt-launcher-vm-cirros-datavolume-nfs-4kxs6                      1/1     Running  
virt-launcher-vm-cirros-datavolume-ocs-fs-ljlgx                   1/1     Running  
virt-launcher-vm-cirros-datavolume-hpp-w5fnn                      1/1     Running  

$ oc exec virt-launcher-vm-cirros-datavolume-nfs-4kxs6 -- find . -name "disk.img"
./run/kubevirt-private/vmi-disks/datavolumevolume-nfs/disk.img


NFS disk.img size: 4.7Gi, but expected to be 100Mi

$ oc exec virt-launcher-vm-cirros-datavolume-nfs-4kxs6 -- stat run/kubevirt-private/vmi-disks/datavolumevolume-nfs/disk.img | grep Size
  Size: 5073430528	Blocks: 9844736    IO Block: 65536  regular file


OCS disk.img size: 94.5Mi, as expected (a little bit smaller than 100Mi)

$ oc exec virt-launcher-vm-cirros-datavolume-ocs-fs-ljlgx -- stat /run/kubevirt-private/vmi-disks/datavolumevolume-ocs-fs/disk.img | grep Size
  Size: 99090432  	Blocks: 160926     IO Block: 1024   regular file


One more thing to check, from inside the VM on HPP:
$ virtctl console vm-cirros-datavolume-hpp
$ dd if=/dev/zero of=file bs=1M
dd: writing 'file': File too large

Expected: 
dd: writing 'file': No space left on device


HPP VM manifest:

$ cat vm-hpp.yaml 
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-cirros-datavolume-hpp
  name: vm-cirros-datavolume-hpp
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: cirros-dv-simple-hpp
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
        storageClassName: hostpath-provisioner
      source:
        http:
          url: http://.../cirros-0.4.0-x86_64-disk.qcow2
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-datavolume-hpp
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumevolume-hpp
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: cirros-dv-simple-hpp
        name: datavolumevolume-hpp


NFS VM manifest:

$ cat vm-nfs.yaml 
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-cirros-datavolume-nfs
  name: vm-cirros-datavolume-nfs
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: cirros-dv-simple-nfs
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
        storageClassName: nfs
      source:
        http:
          url: http://.../cirros-0.4.0-x86_64-disk.qcow2
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-datavolume-nfs
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumevolume-nfs
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: cirros-dv-simple-nfs
        name: datavolumevolume-nfs

Comment 1 Yan Du 2021-12-22 13:21:54 UTC
Hi, Alex, could you please sit with Jenia to see what is going on?

Comment 4 Maya Rashish 2021-12-23 15:01:48 UTC
Would it be preferable to use min(pvc.spec.requests, pvc.status.capacity) as the PVC size for KubeVirt resize purposes?

Comment 5 Alexander Wels 2021-12-23 15:06:33 UTC
So CDI will resize the disk.img to pvc.spec.request. I am not entirely sure how PVC resizes are initiated, but if it is just updating pvc.spec.request, then we need to try to honor the requested size. We need to clearly make a distinction between the pvc.spec.request size and pvc.status.capacity (the capacity comes from the PV).

Comment 6 Maya Rashish 2021-12-26 12:56:48 UTC
Linked diff containing my proposed change.
https://github.com/kubevirt/kubevirt/pull/6999
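
For reference, a minimal Go sketch of the clamping the PR title describes, i.e. expanding disk.img only up to min(pvc.spec.resources.requests.storage, pvc.status.capacity.storage). The helper name and the standalone program are illustrative only, not the actual KubeVirt change; see the linked PR for the real implementation.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// expansionLimit returns the smaller of the user's request
// (pvc.spec.resources.requests.storage) and the size reported by the bound PV
// (pvc.status.capacity.storage). Expanding disk.img all the way to the PV
// capacity is what produced the oversized images on HPP and NFS.
func expansionLimit(specRequest, statusCapacity resource.Quantity) resource.Quantity {
	if specRequest.Cmp(statusCapacity) < 0 {
		return specRequest
	}
	return statusCapacity
}

func main() {
	// Values taken from the HPP case in comment 8: 100Mi requested, 79Gi PV capacity.
	request := resource.MustParse("100Mi")
	capacity := resource.MustParse("79Gi")

	limit := expansionLimit(request, capacity)
	fmt.Printf("expand disk.img up to %s\n", limit.String()) // prints 100Mi, not 79Gi
}

With a limit like this, a 100Mi request backed by a 79Gi HPP PV is no longer inflated to the PV size, which is consistent with the ~94.5Mi disk.img sizes observed during verification in comment 8.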

Comment 7 Jenia Peimer 2022-01-13 14:19:15 UTC
*** Bug 2040300 has been marked as a duplicate of this bug. ***

Comment 8 Jenia Peimer 2022-01-23 09:57:53 UTC
Verified on CNV v4.10.0-605

HPP - disk.img size is 99090432 bytes = 94.5Mi, as expected

$ cat vm-hpp.yaml | grep storage 
            storage: 100Mi
        storageClassName: hostpath-provisioner

$ oc create -f vm-hpp.yaml 
virtualmachine.kubevirt.io/vm-cirros-datavolume-hpp created

$ oc get vmi -A
NAMESPACE   NAME                       AGE   PHASE     IP             NODENAME                            READY
default     vm-cirros-datavolume-hpp   36s   Running   ************   c01-jp410-fr-x449r-worker-0-8lr6n   True

$ oc debug node/c01-jp410-fr-x449r-worker-0-8lr6n
sh-4.4# chroot /host
sh-4.4# stat var/local-basic/csi/pvc-346983a2-f464-4ef2-88e8-4f6ee4b3261c/disk.img | grep Size
  Size: 99090432  	Blocks: 88080      IO Block: 4096   regular file


NFS - disk.img size is 99090432 bytes = 94.5Mi, as expected

$ cat vm-nfs.yaml | grep storage 
            storage: 100Mi
        storageClassName: nfs

$ oc create -f vm-nfs.yaml 
virtualmachine.kubevirt.io/vm-cirros-nfs created

$ oc get vmi -A
NAMESPACE   NAME                       AGE     PHASE     IP             NODENAME                            READY
default     vm-cirros-datavolume-hpp   9m23s   Running   ************   c01-jp410-fr-x449r-worker-0-8lr6n   True
default     vm-cirros-nfs              20s     Running   ************   c01-jp410-fr-x449r-worker-0-7qjzg   True

$ oc get pods -A | grep virt-launcher
default                                            virt-launcher-vm-cirros-datavolume-hpp-ggzdm                      1/1     Running     0             9m26s
default                                            virt-launcher-vm-cirros-nfs-h7sm6                                 1/1     Running     0             39s

$ oc exec virt-launcher-vm-cirros-nfs-h7sm6 -- stat run/kubevirt-private/vmi-disks/datavolumevolume-nfs/disk.img | grep Size
  Size: 99090432  	Blocks: 56032      IO Block: 65536  regular file


$ oc get pvc -A
NAMESPACE           NAME                                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
default             cirros-dv-nfs                                                   Bound    nfs-pv-01                                  5Gi        RWO,RWX        nfs                           117m
default             cirros-dv-simple-hpp                                            Bound    pvc-346983a2-f464-4ef2-88e8-4f6ee4b3261c   79Gi       RWO            hostpath-provisioner          126m
openshift-cnv       hpp-pool-local-pvc-template-c01-jp410-fr-x449r-worker-0-7qjzg   Bound    pvc-350cded2-486b-41a8-b37c-023124d586bb   40Gi       RWO            ocs-storagecluster-ceph-rbd   41h
openshift-cnv       hpp-pool-local-pvc-template-c01-jp410-fr-x449r-worker-0-8lr6n   Bound    pvc-f3980f77-67b1-460e-a42b-8404b4d2f384   40Gi       RWO            ocs-storagecluster-ceph-rbd   41h
openshift-cnv       hpp-pool-local-pvc-template-c01-jp410-fr-x449r-worker-0-jmtjs   Bound    pvc-fa19b701-498d-4431-9d54-88b26b84f23f   40Gi       RWO            ocs-storagecluster-ceph-rbd   41h
openshift-storage   ocs-deviceset-0-data-0xqmfk                                     Bound    local-pv-d5364e03                          150Gi      RWO            local-block-ocs               43h
openshift-storage   ocs-deviceset-0-data-1b7srq                                     Bound    local-pv-e12b9d68                          150Gi      RWO            local-block-ocs               43h
openshift-storage   ocs-deviceset-0-data-29dsrj                                     Bound    local-pv-ebce5123                          150Gi      RWO            local-block-ocs               43h


$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                                         STORAGECLASS                  REASON   AGE
local-pv-d5364e03                          150Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-0-data-0xqmfk                                 local-block-ocs                        43h
local-pv-e12b9d68                          150Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-0-data-1b7srq                                 local-block-ocs                        43h
local-pv-ebce5123                          150Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-0-data-29dsrj                                 local-block-ocs                        43h
nfs-pv-01                                  5Gi        RWO,RWX        Retain           Bound       default/cirros-dv-nfs                                                         nfs                                    43h
nfs-pv-02                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-03                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-04                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-05                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-06                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-07                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    59m
nfs-pv-08                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-09                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-10                                  5Gi        RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-11                                  25Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-12                                  25Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-13                                  25Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-14                                  25Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-15                                  25Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-16                                  25Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-17                                  70Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-18                                  70Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-19                                  70Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
nfs-pv-20                                  70Gi       RWO,RWX        Retain           Available                                                                                 nfs                                    43h
pvc-346983a2-f464-4ef2-88e8-4f6ee4b3261c   79Gi       RWO            Delete           Bound       default/cirros-dv-simple-hpp                                                  hostpath-provisioner                   129m
pvc-350cded2-486b-41a8-b37c-023124d586bb   40Gi       RWO            Delete           Bound       openshift-cnv/hpp-pool-local-pvc-template-c01-jp410-fr-x449r-worker-0-7qjzg   ocs-storagecluster-ceph-rbd            41h
pvc-f3980f77-67b1-460e-a42b-8404b4d2f384   40Gi       RWO            Delete           Bound       openshift-cnv/hpp-pool-local-pvc-template-c01-jp410-fr-x449r-worker-0-8lr6n   ocs-storagecluster-ceph-rbd            41h
pvc-fa19b701-498d-4431-9d54-88b26b84f23f   40Gi       RWO            Delete           Bound       openshift-cnv/hpp-pool-local-pvc-template-c01-jp410-fr-x449r-worker-0-jmtjs   ocs-storagecluster-ceph-rbd            41h

Comment 13 errata-xmlrpc 2022-03-16 16:05:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947

