Description of problem:
The cluster has the cluster-wide proxy configured as below:
~~~
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Proxy
  ....
  ....
  spec:
    trustedCA:
      name: custom-ca  <<<
~~~
However, the custom-ca ConfigMap was only created in the openshift-config namespace, following https://docs.openshift.com/container-platform/4.10/networking/enable-cluster-wide-proxy.html. While importing a disk, the importer pod is stuck in ContainerCreating status with the following event:
~~~
12m  Warning  FailedMount  pod/importer-rhel7-sophisticated-parrotfish  MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found
~~~
This is because custom-ca is not available in the namespace where the disk is being imported. The user has to manually copy the ConfigMap custom-ca into that namespace for the import to work.

Version-Release number of selected component (if applicable):
OpenShift Virtualization 4.10.3

How reproducible:
100%

Steps to Reproduce:
1. Add spec.trustedCA to the cluster-wide proxy configuration.
2. Create the ConfigMap that contains the CA certificates in the openshift-config namespace.
3. Try to import an image in a namespace other than openshift-config. The importer pod will be stuck in `ContainerCreating` status.

Actual results:
The importer pod fails to start with the error:
~~~
MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found
~~~

Expected results:
Since the error is confusing, it would be ideal if the ConfigMap were automatically copied to the namespace where the user is trying to import the VM/disk. If not, the documentation should mention that the ConfigMap must be copied manually.

Additional info:
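As a manual workaround, the trusted CA ConfigMap can be recreated in the import namespace. A minimal sketch is below; the namespace "my-vms", the data key, and the certificate body are illustrative, and the data must match the custom-ca ConfigMap in openshift-config:
~~~
# Manual workaround (sketch): copy of the trusted CA bundle in the
# namespace where the disk import runs. Namespace and certificate
# content below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca
  namespace: my-vms
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <CA certificate body>
    -----END CERTIFICATE-----
~~~
One way to produce this is to export the existing ConfigMap with `oc get configmap custom-ca -n openshift-config -o yaml`, change the namespace, and apply it in the target namespace.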
We had discussed this at some point and came up with a plan to address the issue, but never got around to it. The idea is basically what you said: have the CDI controller automatically copy the config map into the namespace we are going to import into. Give it a random name, and make sure the config map is owned by the importer pod. That way, once the importer pod is removed, the config map goes with it. This should not be too hard to implement.
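The steps above could produce a ConfigMap along these lines (a sketch of the proposed behavior, not the actual implementation; all names, the data key, and the UID are placeholders):
~~~
# Sketch: CDI clones the trustedCA ConfigMap into the import namespace
# under a random name, owned by the importer pod so Kubernetes garbage
# collection removes it together with the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  generateName: cdi-proxy-ca-        # random name
  namespace: my-vms
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: importer-rhel7-sophisticated-parrotfish
    uid: 0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0
data:
  ca-bundle.crt: |
    <copied from the ConfigMap in openshift-config>
~~~
Because the owner reference points at the importer pod, no separate cleanup logic would be needed.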
Alexander, do you plan to fix it in 4.10.6?
We have a plan for how to fix it, but haven't scheduled time to do it yet. So, no plans for 4.10.6.
Adam, Alexander, it's fixed in 4.12. The backport for 4.11 is pending a z-stream release. Is there any reason to backport it to 4.10 as well (it's a bit harder, as only part of the PR is relevant)?
I don't think so; no one is requesting a backport to 4.10.
Arnon, there seems to be a missing backport to 1.43. Please create one and link to this BZ. In the meantime I am moving this back to assigned.
We decided to fix this for 4.11 and 4.12 (not 4.10). Retargeting.
Verified on CNV v4.12.0-769; the import succeeds when the proxy is set:
~~~
$ oc get pod
NAME              READY   STATUS    RESTARTS   AGE
importer-fedora   1/1     Running   0          6s

$ oc get dv
NAME     PHASE              PROGRESS   RESTARTS   AGE
fedora   ImportInProgress   11.15%                13s

$ oc get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
fedora   Bound    pvc-9e6987bf-e7f9-4704-b3d7-0bdae9fb0a6c   149Gi      RWO            hostpath-csi-basic   83s
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.12.0 Images security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0408