Bug 2055546 - Image import is not working if only trustedCA is set in cluster proxy
Summary: Image import is not working if only trustedCA is set in cluster proxy
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.9.2
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.4
Assignee: Alexander Wels
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On: 2049800
Blocks:
 
Reported: 2022-02-17 08:22 UTC by nijin ashok
Modified: 2025-08-08 12:32 UTC
CC: 6 users

Fixed In Version: CNV v4.9.3-30
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-26 16:54:31 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 6954726 0 None None None 2022-05-16 06:04:50 UTC
Red Hat Product Errata RHEA-2022:1596 0 None None None 2022-04-26 16:54:46 UTC

Description nijin ashok 2022-02-17 08:22:49 UTC
Description of problem:

The trustedCA is set in cluster proxy.

~~~
oc get proxies cluster -o yaml

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: "2021-10-13T14:59:39Z"
  generation: 7
  name: cluster
  resourceVersion: "162106804"
  uid: 888382f9-4f9f-460a-ae51-d5c651b056b2
spec:
  trustedCA:
    name: custom-ca      <<<<
status: {}
~~~
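For reference, the config map named by trustedCA is expected to live in the openshift-config namespace with the CA bundle stored under the ca-bundle.crt key. A typical creation command (the bundle file path is a placeholder, not taken from this report):

~~~
oc create configmap custom-ca \
  --from-file=ca-bundle.crt=/path/to/ca-bundle.crt \
  -n openshift-config
~~~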

Once configured, the CDIConfig is updated with this configuration.

~~~
oc get cdiconfig config -o yaml |yq -y '.status.importProxy'
HTTPProxy: ''
HTTPSProxy: ''
noProxy: ''
trustedCAProxy: custom-ca <<<
~~~

After this, image import fails while creating the importer pod, with the error below.

~~~
{"level":"error","ts":1645085463.2661765,"logger":"controller-runtime.manager.controller.import-controller","msg":"Reconciler error","name":"rhel8-traditional-crab","namespace":"nijin-cnv","error":"Pod \"importer-rhel8-traditional-crab\" is invalid: spec.containers[0].volumeMounts[1].name: Not found: \"cdi-proxy-cert-vol\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:302\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99"}
~~~

Version-Release number of selected component (if applicable):

~~~
oc get csv
NAME                                      DISPLAY                    VERSION   REPLACES                                  PHASE
kubevirt-hyperconverged-operator.v4.9.2   OpenShift Virtualization   4.9.2     kubevirt-hyperconverged-operator.v4.9.1   Succeeded

oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.18    True        False         24m     Cluster version is 4.9.18
~~~

How reproducible:

100%


Steps to Reproduce:

1. Edit the cluster proxy and set only "trustedCA" (no httpProxy/httpsProxy).
2. Create a DataVolume (dv) to import an image. The importer pod fails to be created with the above error.
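Step 2 can be reproduced with a minimal DataVolume manifest such as the following; the name, source URL, and storage size are placeholders, not values from this report:

~~~
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: test-dv
spec:
  source:
    http:
      url: "http://example.com/disk.qcow2"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
~~~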

Actual results:

Image import does not work when only trustedCA is set in the cluster proxy.

Expected results:

Import should work with trustedCAProxy configured.

Additional info:

Comment 2 nijin ashok 2022-02-17 12:00:05 UTC
The issue appears to be caused by a name mismatch between spec.containers[0].volumeMounts[1].name and pod.Spec.Volumes[].name: one gets cdi-proxy-cert-vol and the other cdi-cert-vol.


~~~
1006 func makeImporterPodSpec(args *importerPodArgs) *corev1.Pod {
....
...
1132         if args.podEnvVar.certConfigMapProxy != "" {
1133                 vm := corev1.VolumeMount{
1134                         Name:      ProxyCertVolName,            <<<<<
1135                         MountPath: common.ImporterProxyCertDir,
1136                 }
1137                 pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts, vm)
1138                 pod.Spec.Volumes = append(pod.Spec.Volumes, createProxyConfigMapVolume(CertVolName, args.podEnvVar.certConfigMapProxy))  <<<<<
1139         }
~~~

This is already fixed upstream [1], and I cannot reproduce the issue with CDI built from the current main branch.

[1] https://github.com/kubevirt/containerized-data-importer/pull/2132/commits/100bf4d09f267f852a0365d353941d97a48ee064#diff-868af4b730094e9180f78af4cc069ae6cecf7077a46c186553f331dccbd32993L1163

Can we backport this to 4.9?

Comment 3 nijin ashok 2022-02-17 12:16:17 UTC
(In reply to nijin ashok from comment #2)
 
> Can we backport this to 4.9?

Sorry, I didn't see this https://github.com/kubevirt/containerized-data-importer/pull/2138.

Comment 4 Kevin Alon Goldblatt 2022-02-22 15:21:39 UTC
Verified with the following code:
---------------------------------------------
oc version
Client Version: 4.10.0-202202160023.p0.gf93da17.assembly.stream-f93da17
Server Version: 4.10.0-rc.3
Kubernetes Version: v1.23.3+2e8bad7
[cnv-qe-jenkins@stg10-kevin-pdn2w-executor kev]$ oc get csv -n openshift-cnv
NAME                                       DISPLAY                    VERSION   REPLACES                                  PHASE
kubevirt-hyperconverged-operator.v4.10.0   OpenShift Virtualization   4.10.0    kubevirt-hyperconverged-operator.v4.9.2   Succeeded


Verified with the following scenario:
---------------------------------------------
1. Edited the cluster proxy
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: "2022-02-22T00:48:43Z"
  generation: 3
  name: cluster
  resourceVersion: "1266476"
  uid: bc5afbf7-76eb-44ea-8860-43b7bef8fc11
spec:
  trustedCA:
    name: my-trusted-cm
status: {}

2. The cdiconfig is updated

    storageClass:
      csi-manila-ceph: "0.055"
      hostpath-csi: "0.055"
      hostpath-provisioner: "0.055"
      local-block-ocs: "0.055"
      nfs: "0.055"
      ocs-storagecluster-ceph-rbd: "0.055"
      standard: "0.055"
      standard-csi: "0.055"
  importProxy:
    HTTPProxy: ""
    HTTPSProxy: ""
    noProxy: ""
    trustedCAProxy: my-trusted-cm
  scratchSpaceStorageClass: hostpath-provisioner
  uploadProxyURL: cdi-uploadproxy-openshift-cnv.apps.stg10-kevin.cnv-qe.rhcloud.com

3. Created a dv and the import succeeds.


Moving to VERIFIED!

Comment 5 Kevin Alon Goldblatt 2022-02-22 17:17:43 UTC
Verified on 4.10, need to verify on 4.9.4. Awaiting D/S build

Comment 8 Kevin Alon Goldblatt 2022-03-09 12:04:44 UTC
Verified with the following code:
------------------------------------
oc version
Client Version: 4.10.0-202201310820.p0.g7c299f1.assembly.stream-7c299f1
Server Version: 4.9.23
Kubernetes Version: v1.22.3+b93fd35
[cnv-qe-jenkins@stg03-kevin-p82rf-executor kevin]$ oc get csv -n openshift-cnv
NAME                                      DISPLAY                    VERSION   REPLACES                                  PHASE
kubevirt-hyperconverged-operator.v4.9.4   OpenShift Virtualization   4.9.4     kubevirt-hyperconverged-operator.v4.9.3   Succeeded


Verified with the following scenario:
----------------------------------------
1. Created a config map
oc create cm my-trusted-cm
configmap/my-trusted-cm created

oc get cm my-trusted-cm -oyaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "2022-03-09T11:40:09Z"
  name: my-trusted-cm
  namespace: default
  resourceVersion: "886147"
  uid: 3f23a9cb-f4e3-465a-b817-03aa58c66156

2. Edited the cluster proxy

oc get proxies cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: "2022-03-08T19:27:37Z"
  generation: 2
  name: cluster
  resourceVersion: "878789"
  uid: 83eb1dc2-1514-4ccf-ac46-c1f8f0c27130
spec:
  trustedCA:
    name: my-trusted-cm     <<<<<<<<<<<<
status: {}

3. The cdiconfig is updated
oc get cdiconfig -oyaml
apiVersion: v1
items:
- apiVersion: cdi.kubevirt.io/v1beta1
  kind: CDIConfig
  metadata:
    creationTimestamp: "2022-03-08T20:42:06Z"
    generation: 9
    labels:
      app: containerized-data-importer
      app.kubernetes.io/component: storage
      app.kubernetes.io/managed-by: cdi-controller
      app.kubernetes.io/part-of: hyperconverged-cluster
      app.kubernetes.io/version: v4.9.4
      cdi.kubevirt.io: ""
    name: config
    ownerReferences:
    - apiVersion: cdi.kubevirt.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: CDI
      name: cdi-kubevirt-hyperconverged
      uid: d699987b-e3e3-45b0-9b3f-850deaa495c9
    resourceVersion: "878792"
    uid: 811ae7ac-23a4-485f-81a1-b970844b834d
  spec:
    featureGates:
    - HonorWaitForFirstConsumer
  status:
    defaultPodResourceRequirements:
      limits:
        cpu: 750m
        memory: 600M
      requests:
        cpu: 100m
        memory: 60M
    filesystemOverhead:
      global: "0.055"
      storageClass:
        csi-manila-ceph: "0.055"
        hostpath-provisioner: "0.055"
        local-block: "0.055"
        nfs: "0.055"
        ocs-storagecluster-ceph-rbd: "0.055"
        standard: "0.055"
        standard-csi: "0.055"
    importProxy:
      HTTPProxy: ""
      HTTPSProxy: ""
      noProxy: ""
      trustedCAProxy: my-trusted-cm     <<<<<<<<<<<<<<<<<<<<
    scratchSpaceStorageClass: hostpath-provisioner
    uploadProxyURL: cdi-uploadproxy-openshift-cnv.apps.xxx.xxx.xxx.com
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


4. Created a dv - dv imports successfully! 


Moving to VERIFIED

Comment 14 errata-xmlrpc 2022-04-26 16:54:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.9.4 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:1596

