Bug 2066956

Summary: Automatic Importing of Boot Sources fails with "A larger PVC is required"
Product: Container Native Virtualization (CNV) Reporter: Chandler Wilkerson <cwilkers>
Component: Storage    Assignee: Dominik Holler <dholler>
Status: CLOSED DUPLICATE QA Contact: Geetika Kapoor <gkapoor>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.10.0    CC: alitke, awels, cnv-qe-bugs
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-03-23 14:31:19 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Chandler Wilkerson 2022-03-22 20:37:50 UTC
Description of problem:
On a fresh install of 4.10/CNV 4.10 using Trident, the boot-source PVCs are created before Trident is installed (the CNV operators must be installed before the NNCP that reaches the storage network).

When Trident installs, all boot source import pods go into CrashLoopBackOff with messages like the following (from importer-centos-stream8-178796a2fbe3):

    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Message:      Unable to process data: Unable to convert source data to target format: Virtual image size 10737418240 is larger than available size 10737191485 (PVC size 10737418240, reserved overhead 0.055000%). A larger PVC is required.
      Exit Code:    1


Version-Release number of selected component (if applicable):

4.10.0

How reproducible:

Reproduced on the first install attempt; likely always.

Steps to Reproduce:
1. Install CNV 4.10 without a default storage class
2. Install Trident
3. Observe import pods in openshift-virtualization-os-images NS

Actual results:
All importers in CrashLoopBackOff

Expected results:
Imports start once PVCs bind.

Additional info:

Comment 1 Chandler Wilkerson 2022-03-22 21:04:56 UTC
Per @awels this looks like https://github.com/kubevirt/containerized-data-importer/pull/2195, where overhead is double-calculated unnecessarily. Workaround:


oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'containerizeddataimporter.kubevirt.io/jsonpatch=[{"op": "add", "path": "/spec/config/filesystemOverhead", "value": {}}, {"op": "add", "path": "/spec/config/filesystemOverhead/global", "value": "0.0"}]'

This works once you delete the existing CrashLoopBackOff pods.
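The double-counting described in the PR above can be sketched with simple integer arithmetic. This is an illustration only, not CDI's actual code; the 5.5% figure is CDI's default filesystemOverhead, expressed in permille here to stay within shell integer math:

```shell
# Illustrative sketch of the overhead double-count, NOT CDI source code.
# CDI's default filesystemOverhead is 0.055 (5.5%); use permille integers.
virtual=$((10 * 1024 * 1024 * 1024))   # 10 GiB virtual image size
keep_permille=$((1000 - 55))           # usable fraction after overhead

# Correct sizing: pad the PVC request once (rounding up) so the image
# still fits after the importer reserves overhead at validation time.
padded=$(( (virtual * 1000 + keep_permille - 1) / keep_permille ))
available_padded=$(( padded * keep_permille / 1000 ))
echo "padded PVC, image fits: $(( virtual <= available_padded ))"

# Buggy path from the error message above: the PVC equals the virtual
# image size exactly, so deducting overhead again leaves too little room.
available_unpadded=$(( virtual * keep_permille / 1000 ))
echo "unpadded PVC, image fits: $(( virtual <= available_unpadded ))"
```

With padding applied once, the padded check prints 1 (fits) and the unpadded check prints 0, matching the "A larger PVC is required" failure.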

Comment 2 Alexander Wels 2022-03-23 12:05:23 UTC
The workaround doesn't work for all provisioners. For instance, it fails with Portworx, which does have real filesystem overhead, so the actual available size ends up too small. The fix in the PR resolves that.

Comment 3 Chandler Wilkerson 2022-03-23 12:40:23 UTC
The workaround also triggers an alert:

unsafe modification for the containerizeddataimporter.kubevirt.io/jsonpatch annotation in the HyperConverged resource.

Comment 4 Adam Litke 2022-03-23 14:31:19 UTC

*** This bug has been marked as a duplicate of bug 2066712 ***