Bug 2066956 - Automatic Importing of Boot Sources fails with "A larger PVC is required"
Keywords:
Status: CLOSED DUPLICATE of bug 2066712
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.10.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Dominik Holler
QA Contact: Geetika Kapoor
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-22 20:37 UTC by Chandler Wilkerson
Modified: 2022-03-23 14:31 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-23 14:31:19 UTC
Target Upstream Version:
Embargoed:



Description Chandler Wilkerson 2022-03-22 20:37:50 UTC
Description of problem:
On a fresh install of OCP 4.10/CNV 4.10 using Trident, PVCs are created before Trident is installed (because the CNV operators must be in place before the NNCP that provides access to the storage network).

When Trident installs, all boot source import pods go into CrashLoopBackOff with messages like the following (from importer-centos-stream8-178796a2fbe3):

    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Message:      Unable to process data: Unable to convert source data to target format: Virtual image size 10737418240 is larger than available size 10737191485 (PVC size 10737418240, reserved overhead 0.055000%). A larger PVC is required.
      Exit Code:    1
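For context, the check that produces this error can be sketched as follows. This is an illustrative reconstruction, not CDI's actual source: it assumes the usable space is derived from the PVC size by subtracting a configurable filesystem-overhead fraction, which is why a 10 GiB image cannot fit a 10 GiB PVC once any overhead is reserved.

```python
# Illustrative sketch of CDI's size validation (NOT the actual CDI code).
# Assumption: usable space = PVC size reduced by an overhead fraction.

def usable_space(pvc_size_bytes: int, overhead_fraction: float) -> int:
    """Bytes left for the image after reserving filesystem overhead."""
    return int(pvc_size_bytes * (1 - overhead_fraction))

def validate_import(virtual_size: int, pvc_size: int, overhead: float) -> None:
    """Raise if the virtual image cannot fit in the usable PVC space."""
    available = usable_space(pvc_size, overhead)
    if virtual_size > available:
        raise ValueError(
            f"Virtual image size {virtual_size} is larger than available "
            f"size {available} (PVC size {pvc_size}, reserved overhead "
            f"{overhead:.6%}). A larger PVC is required."
        )

# With a 5.5% overhead reserved, a 10 GiB image fails on a 10 GiB PVC:
GIB = 1024 ** 3
try:
    validate_import(10 * GIB, 10 * GIB, 0.055)
except ValueError as e:
    print(e)
```

With a zero overhead (as in the workaround below), the same call succeeds, which is why clearing the global overhead unblocks the import on backends with no real filesystem overhead.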


Version-Release number of selected component (if applicable):

4.10.0

How reproducible:

Happens on the first install attempt; likely always.

Steps to Reproduce:
1. Install CNV 4.10 without a default storage class
2. Install Trident
3. Observe the import pods in the openshift-virtualization-os-images namespace

Actual results:
All importers in CrashLoopBackOff

Expected results:
Imports start once PVCs bind.

Additional info:

Comment 1 Chandler Wilkerson 2022-03-22 21:04:56 UTC
Per @awels this looks like https://github.com/kubevirt/containerized-data-importer/pull/2195, where the overhead is double-calculated unnecessarily. Workaround:


oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'containerizeddataimporter.kubevirt.io/jsonpatch=[{"op": "add", "path": "/spec/config/filesystemOverhead", "value": {}}, {"op": "add", "path": "/spec/config/filesystemOverhead/global", "value": "0.0"}]'

This works if you kill the existing crashloopbackoff pods.
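One hedged way to kill the stuck pods after applying the annotation (the label selector below is an assumption about how CDI labels its importer pods; verify it against your cluster, or delete the importer-* pods by name instead):

```shell
# Delete the CrashLoopBackOff importer pods so they are recreated
# with the new overhead setting. NOTE: the label selector is an
# assumption; check with `oc get pods --show-labels` first.
oc delete pod -n openshift-virtualization-os-images \
  -l app=containerized-data-importer
```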

Comment 2 Alexander Wels 2022-03-23 12:05:23 UTC
The workaround doesn't work for all provisioners. For instance, it fails on Portworx, which does have real filesystem overhead, so the actual available size ends up too small. The fix in the PR resolves that.

Comment 3 Chandler Wilkerson 2022-03-23 12:40:23 UTC
The workaround also triggers an alert:

unsafe modification for the containerizeddataimporter.kubevirt.io/jsonpatch annotation in the HyperConverged resource.

Comment 4 Adam Litke 2022-03-23 14:31:19 UTC

*** This bug has been marked as a duplicate of bug 2066712 ***

