Bug 2064936 - Migration of vm from VMware reports pvc not large enough
Summary: Migration of vm from VMware reports pvc not large enough
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.9.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 4.9.4
Assignee: Bartosz Rybacki
QA Contact: dalia
URL:
Whiteboard:
Duplicates: 2059057 (view as bug list)
Depends On:
Blocks: 2066712
 
Reported: 2022-03-16 21:04 UTC by Luke Stanton
Modified: 2022-09-14 19:29 UTC (History)
CC List: 13 users

Fixed In Version: v4.9.4-5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 2066712 (view as bug list)
Environment:
Last Closed: 2022-09-14 19:29:09 UTC
Target Upstream Version:
Embargoed:
Flags: istein: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt containerized-data-importer pull 2195 0 None Merged Do not factor fs overhead into available space during validation 2022-03-22 11:10:51 UTC
Github kubevirt containerized-data-importer pull 2199 0 None Merged [release-v1.38] Do not factor fs overhead into available space during validation 2022-03-22 14:31:04 UTC
Red Hat Knowledge Base (Solution) 6768211 0 None None None 2022-03-31 15:51:43 UTC
Red Hat Product Errata RHSA-2022:6526 0 None None None 2022-09-14 19:29:25 UTC

Description Luke Stanton 2022-03-16 21:04:17 UTC
Description of problem:

When attempting to migrate a RHEL 8.4 VM from VMware using MTV (Migration Toolkit for Virtualization), the migration fails with an error reporting that the PVC it created is not large enough to support the migration:

-----------------------------------
# virtctl image-upload dv rhel-8...
PVC abc/rhel-8 not found 
DataVolume abc/rhel-8 created
Waiting for PVC rhel-8 upload pod to be ready...
Pod now ready
Uploading data to https://*****.com

unexpected return value 400, Saving stream failed: Virtual image size abc is larger than available size xyz (PVC size abc, reserved overhead 0.055000%). A larger PVC is required.
-----------------------------------

The workaround is to manually grow the created PVC to 2x its size just after starting the migration plan. This allows the migration to "complete". The VM was migrated in a powered-off state and the guest agent was shown as installing.

The VM was then started and qemu-guest-agent was installed.
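For reference, the failing size check can be inverted to estimate how much headroom the PVC actually needed, instead of blindly doubling it. This is a sketch using the byte values quoted in comment 2; the `0.055` is CDI's default filesystem-overhead fraction (5.5%, despite the misleading "%" in the error text), and the commented `kubectl patch` line is a hypothetical example of growing the PVC, which requires a StorageClass with `allowVolumeExpansion: true`:

```shell
# Virtual image size in bytes (30 GiB), as quoted in comment 2:
IMAGE_SIZE=32212254720
# CDI's default filesystem-overhead fraction (5.5%):
OVERHEAD=0.055

# Smallest PVC that passes the validation: image / (1 - overhead)
REQUIRED=$(awk -v s="$IMAGE_SIZE" -v o="$OVERHEAD" \
  'BEGIN { printf "%.0f", s / (1 - o) }')
echo "$REQUIRED"   # roughly 34.1 GB, i.e. ~5.8% above the image size

# Hypothetical expansion of the PVC from the log (namespace abc, name rhel-8);
# not run here, and only works on an expansion-capable StorageClass:
# kubectl -n abc patch pvc rhel-8 --type merge \
#   -p '{"spec":{"resources":{"requests":{"storage":"35Gi"}}}}'
```

So under that formula a ~35 Gi PVC would already have sufficed; the 2x growth in the workaround is simply a safe overshoot.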



Version-Release number of selected component (if applicable):

OCP 4.9.23
MTV 2.2



How reproducible: Consistently


Actual results: Migration fails with insufficient storage error


Expected results: Migration would succeed

Comment 2 Jan Safranek 2022-03-17 09:19:36 UTC
> unexpected return value 400, Saving stream failed: Virtual image size 32212254720 is larger than available size 28254549012 (PVC size 32212254720, reserved overhead 0.055000%). A larger PVC is required.

As you can see, a 30 GiB PVC ends up with only ~26.3 GiB of usable space, which is not much. It seems that Portworx needs more than the 0.055 overhead accounts for. In any case, I don't see any issue on the OCP side; all volumes were provisioned and mounted to the right Pods.

I am moving this to the migration toolkit to make sure they did the overhead calculation right; I don't know where this 0.055% comes from. Perhaps they can document that some storage backends need more overhead, plus a way to configure it.
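The overhead reservation is in fact configurable on the CDI side. As a hedged sketch (assuming upstream CDI, where the CDI custom resource is typically named `cdi`; the `portworx-sc` StorageClass name is hypothetical), the global or per-StorageClass filesystem overhead could be raised like this:

```shell
# Raise the global filesystem-overhead reservation from the 0.055 default
# to 10%, and set a larger value for one hypothetical StorageClass
# ("portworx-sc"); overhead values are strings between "0" and "1":
PATCH='{"spec":{"config":{"filesystemOverhead":{
  "global":"0.1",
  "storageClass":{"portworx-sc":"0.15"}}}}}'

echo "$PATCH"
# Against a live cluster (not run here):
# kubectl patch cdi cdi --type merge -p "$PATCH"
```

Per-StorageClass values override the global one, so a backend with unusually high filesystem overhead can be handled without inflating every other PVC.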

Comment 3 Adam Litke 2022-03-17 12:37:33 UTC
This is related to filesystem overhead on the CNV import side so I am moving the bug into the CNV Product.

Comment 8 Adam Litke 2022-03-31 15:51:44 UTC
*** Bug 2059057 has been marked as a duplicate of this bug. ***

Comment 11 dalia 2022-04-27 13:09:33 UTC
Verified on CNV 4.9.4.

Comment 14 errata-xmlrpc 2022-09-14 19:29:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.11.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6526

