Description of problem:
It is possible to add a custom DataImportCron to HCO with the same name as a CNV-provided DataImportCron. As a result, SSP ends up with two DataImportCrons with the same name; the custom one is added last and therefore takes precedence over the one provided by CNV.

Version-Release number of selected component (if applicable):
CNV 4.10.0

How reproducible:

Steps to Reproduce:
1. Update HCO, adding a custom DataImportCron whose name is identical to one of those created by HCO as part of the auto-update boot sources (for example, centos8-image-cron).

Actual results:
SSP has two DataImportCrons named centos8-image-cron.

Expected results:
Adding a DataImportCron with the same name as one provided by CNV should be blocked.

Additional info:

====================== HCO Update =====================================
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: 'true'
      name: centos8-image-cron
    spec:
      managedDataSource: custom-data-source
      schedule: '* * * * *'
      template:
        spec:
          source:
            registry:
              url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
          storage:
            resources:
              requests:
                storage: 10Gi

===================== HCO CR ======================================
$ oc get hco -n openshift-cnv -oyaml
apiVersion: v1
items:
- apiVersion: hco.kubevirt.io/v1beta1
  kind: HyperConverged
  metadata:
    creationTimestamp: "2022-01-16T18:19:28Z"
    finalizers:
    - kubevirt.io/hyperconverged
    generation: 37
    labels:
      app: kubevirt-hyperconverged
    name: kubevirt-hyperconverged
    namespace: openshift-cnv
    resourceVersion: "2283179"
    uid: 279b1835-df46-417a-aac4-1fbdfae2d242
  spec:
    certConfig:
      ca:
        duration: 48h0m0s
        renewBefore: 24h0m0s
      server:
        duration: 24h0m0s
        renewBefore: 12h0m0s
    dataImportCronTemplates:
    - metadata:
        annotations:
          cdi.kubevirt.io/storage.bind.immediate.requested: "true"
        name: centos8-image-cron
      spec:
        managedDataSource: custom-data-source
        schedule: '* * * * *'
        template:
          spec:
            source:
              registry:
                url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
            storage:
              resources:
                requests:
                  storage: 10Gi
    featureGates:
      enableCommonBootImageImport: true
      sriovLiveMigration: true
      withHostPassthroughCPU: false
    infra: {}
    liveMigrationConfig:
      completionTimeoutPerGiB: 800
      parallelMigrationsPerCluster: 5
      parallelOutboundMigrationsPerNode: 2
      progressTimeout: 150
    uninstallStrategy: BlockUninstallIfWorkloadsExist
    workloadUpdateStrategy:
      batchEvictionInterval: 1m0s
      batchEvictionSize: 10
      workloadUpdateMethods:
      - LiveMigrate
    workloads: {}
  status:
    conditions:
    - lastTransitionTime: "2022-01-17T11:27:53Z"
      message: Reconcile completed successfully
      observedGeneration: 37
      reason: ReconcileCompleted
      status: "True"
      type: ReconcileComplete
    - lastTransitionTime: "2022-01-17T15:18:47Z"
      message: 'SSP is not available: Reconciling SSP resources'
      observedGeneration: 37
      reason: SSPNotAvailable
      status: "False"
      type: Available
    - lastTransitionTime: "2022-01-17T15:18:47Z"
      message: 'SSP is progressing: Reconciling SSP resources'
      observedGeneration: 37
      reason: SSPProgressing
      status: "True"
      type: Progressing
    - lastTransitionTime: "2022-01-17T15:18:47Z"
      message: 'SSP is degraded: Reconciling SSP resources'
      observedGeneration: 37
      reason: SSPDegraded
      status: "True"
      type: Degraded
    - lastTransitionTime: "2022-01-17T15:18:47Z"
      message: 'SSP is progressing: Reconciling SSP resources'
      observedGeneration: 37
      reason: SSPProgressing
      status: "False"
      type: Upgradeable
    dataImportSchedule: 7 6/12 * * *
    observedGeneration: 37
    relatedObjects:
    - apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      name: kubevirt-cluster-critical
      resourceVersion: "64680"
      uid: 8e150412-3e3d-4d22-a00a-92bb0dfa009d
    - apiVersion: kubevirt.io/v1
      kind: KubeVirt
      name: kubevirt-kubevirt-hyperconverged
      namespace: openshift-cnv
      resourceVersion: "96805"
      uid: 77a29a45-e5c5-4991-b98f-d908d69edc4f
    - apiVersion: cdi.kubevirt.io/v1beta1
      kind: CDI
      name: cdi-kubevirt-hyperconverged
      resourceVersion: "73376"
      uid: 64b0cf4f-0719-4684-a657-812bedbaea15
    - apiVersion: v1
      kind: ConfigMap
      name: kubevirt-storage-class-defaults
      namespace: openshift-cnv
      resourceVersion: "65718"
      uid: 0f0e0e67-ad7f-45b0-9da9-82d481a8d258
    - apiVersion: networkaddonsoperator.network.kubevirt.io/v1
      kind: NetworkAddonsConfig
      name: cluster
      resourceVersion: "2282549"
      uid: de678c92-9088-4230-8f95-d682138eb36b
    - apiVersion: ssp.kubevirt.io/v1beta1
      kind: SSP
      name: ssp-kubevirt-hyperconverged
      namespace: openshift-cnv
      resourceVersion: "2283175"
      uid: d184b0e2-959d-431c-a713-12430411ec62
    - apiVersion: v1
      kind: Service
      name: kubevirt-hyperconverged-operator-metrics
      namespace: openshift-cnv
      resourceVersion: "65740"
      uid: 4d1d7721-dc8f-4ca8-b1c2-809090a7c84c
    - apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      name: kubevirt-hyperconverged-operator-metrics
      namespace: openshift-cnv
      resourceVersion: "65743"
      uid: 8adc43c1-85f8-45af-bb70-6a44e08d4249
    - apiVersion: monitoring.coreos.com/v1
      kind: PrometheusRule
      name: kubevirt-hyperconverged-prometheus-rule
      namespace: openshift-cnv
      resourceVersion: "65748"
      uid: 4723faed-bcb9-438a-bdc6-8ce74b0ca2b0
    - apiVersion: console.openshift.io/v1
      kind: ConsoleCLIDownload
      name: virtctl-clidownloads-kubevirt-hyperconverged
      resourceVersion: "65751"
      uid: 7cdfcfaa-1628-4234-8840-809a345937e4
    - apiVersion: route.openshift.io/v1
      kind: Route
      name: hyperconverged-cluster-cli-download
      namespace: openshift-cnv
      resourceVersion: "65766"
      uid: 829ae7ac-2b6b-4b18-b76a-1b436e797359
    - apiVersion: v1
      kind: Service
      name: hyperconverged-cluster-cli-download
      namespace: openshift-cnv
      resourceVersion: "65764"
      uid: ff27d24a-c9a4-45f9-89a9-c084e47e319f
    - apiVersion: console.openshift.io/v1
      kind: ConsoleQuickStart
      name: connect-ext-net-to-vm
      resourceVersion: "65772"
      uid: 58dcd724-cac2-4991-aff1-771a522026a4
    - apiVersion: console.openshift.io/v1
      kind: ConsoleQuickStart
      name: create-win10-vm
      resourceVersion: "65773"
      uid: cc1413fc-0421-433e-9f29-34215fee9983
    - apiVersion: console.openshift.io/v1
      kind: ConsoleQuickStart
      name: create-rhel-vm
      resourceVersion: "65776"
      uid: d35e35dc-7552-4e4a-b8bd-7349490906e6
    - apiVersion: console.openshift.io/v1
      kind: ConsoleQuickStart
      name: customize-a-boot-source
      resourceVersion: "65777"
      uid: 8505f77b-e35f-47f7-9a7c-250ba03742c0
    - apiVersion: v1
      kind: ConfigMap
      name: grafana-dashboard-kubevirt-top-consumers
      namespace: openshift-config-managed
      resourceVersion: "65780"
      uid: 6cefb3dc-e95e-41ad-ae7e-e90db061086f
    - apiVersion: image.openshift.io/v1
      kind: ImageStream
      name: rhel8-guest
      namespace: openshift-virtualization-os-images
      resourceVersion: "67092"
      uid: 7bbb4bfa-2d90-4181-b068-17d71d3e8848
    - apiVersion: image.openshift.io/v1
      kind: ImageStream
      name: rhel9-guest
      namespace: openshift-virtualization-os-images
      resourceVersion: "67090"
      uid: 89f91448-a3f1-4714-9251-b9b30dbdcbcf
    - apiVersion: v1
      kind: ConfigMap
      name: virtio-win
      namespace: openshift-cnv
      resourceVersion: "67039"
      uid: 13841d8b-6cf9-428d-a1d0-e86d785ea757
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      name: virtio-win
      namespace: openshift-cnv
      resourceVersion: "67045"
      uid: 5ff8c7cd-8e4f-4cc8-aa20-29b6fefc8ed5
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      name: virtio-win
      namespace: openshift-cnv
      resourceVersion: "67046"
      uid: 18d58541-410c-47b3-ac73-6906a401b438
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      name: hco.kubevirt.io:config-reader
      namespace: openshift-cnv
      resourceVersion: "67055"
      uid: 83f0fe97-4423-44a8-8932-a66e0f58d1ec
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      name: hco.kubevirt.io:config-reader
      namespace: openshift-cnv
      resourceVersion: "67058"
      uid: b88d4bb3-906d-426f-8b66-d2178c3a14b4
    versions:
    - name: operator
      version: 4.10.0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

====================== SSP CR =====================================
$ oc get ssp -n openshift-cnv -oyaml
apiVersion: v1
items:
- apiVersion: ssp.kubevirt.io/v1beta1
  kind: SSP
  metadata:
    creationTimestamp: "2022-01-16T18:19:28Z"
    finalizers:
    - ssp.kubevirt.io/finalizer
    generation: 36
    labels:
      app: kubevirt-hyperconverged
      app.kubernetes.io/component: schedule
      app.kubernetes.io/managed-by: hco-operator
      app.kubernetes.io/part-of: hyperconverged-cluster
      app.kubernetes.io/version: 4.10.0
    name: ssp-kubevirt-hyperconverged
    namespace: openshift-cnv
    resourceVersion: "2283929"
    uid: d184b0e2-959d-431c-a713-12430411ec62
  spec:
    commonTemplates:
      dataImportCronTemplates:
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: rhel8-image-cron
        spec:
          garbageCollect: Outdated
          managedDataSource: rhel8
          schedule: 7 6/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  imageStream: rhel8-guest
                  pullMethod: node
              storage:
                resources:
                  requests:
                    storage: 10Gi
            status: {}
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: rhel9-image-cron
        spec:
          garbageCollect: Outdated
          managedDataSource: rhel9
          schedule: 7 6/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  imageStream: rhel9-guest
                  pullMethod: node
              storage:
                resources:
                  requests:
                    storage: 10Gi
            status: {}
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: centos8-image-cron
        spec:
          garbageCollect: Outdated
          managedDataSource: centos8
          schedule: 7 6/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  url: docker://quay.io/containerdisks/centos:8.4
              storage:
                resources:
                  requests:
                    storage: 10Gi
            status: {}
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: fedora-image-cron
        spec:
          garbageCollect: Outdated
          managedDataSource: fedora
          schedule: 7 6/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  url: docker://quay.io/containerdisks/fedora:35
              storage:
                resources:
                  requests:
                    storage: 5Gi
            status: {}
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: centos8-image-cron
        spec:
          managedDataSource: custom-data-source
          schedule: '* * * * *'
          template:
            metadata: {}
            spec:
              source:
                registry:
                  url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
              storage:
                resources:
                  requests:
                    storage: 10Gi
            status: {}
      namespace: openshift
    nodeLabeller: {}
    templateValidator:
      replicas: 2
  status:
    conditions:
    - lastHeartbeatTime: "2022-01-17T15:19:13Z"
      lastTransitionTime: "2022-01-17T15:19:13Z"
      message: Reconciling SSP resources
      reason: available
      status: "False"
      type: Available
    - lastHeartbeatTime: "2022-01-17T15:19:13Z"
      lastTransitionTime: "2022-01-17T15:19:13Z"
      message: Reconciling SSP resources
      reason: progressing
      status: "True"
      type: Progressing
    - lastHeartbeatTime: "2022-01-17T15:19:13Z"
      lastTransitionTime: "2022-01-17T15:19:13Z"
      message: Reconciling SSP resources
      reason: degraded
      status: "True"
      type: Degraded
    observedGeneration: 36
    observedVersion: 4.10.0
    operatorVersion: 4.10.0
    phase: Deploying
    targetVersion: 4.10.0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

======================== DataImportCron ===================================
$ oc get dic centos8-image-cron -n openshift-virtualization-os-images -oyaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    cdi.kubevirt.io/storage.import.sourceDesiredDigest: sha256:4a0c3f9526551d0294079f1b0171a071a57fe0bf60a2e8529bf4102ee63a67cd
    operator-sdk/primary-resource: openshift-cnv/ssp-kubevirt-hyperconverged
    operator-sdk/primary-resource-type: SSP.ssp.kubevirt.io
  creationTimestamp: "2022-01-17T14:35:46Z"
  finalizers:
  - cdi.kubevirt.io/dataImportCronFinalizer
  generation: 35
  labels:
    app.kubernetes.io/component: templating
    app.kubernetes.io/managed-by: ssp-operator
    app.kubernetes.io/name: data-sources
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.10.0
  name: centos8-image-cron
  namespace: openshift-virtualization-os-images
  resourceVersion: "2281987"
  uid: 4f78950f-e9ee-4a61-99ec-90342b9999b7
spec:
  managedDataSource: custom-data-source
  schedule: '* * * * *'
  template:
    metadata: {}
    spec:
      source:
        registry:
          url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
      storage:
        resources:
          requests:
            storage: 10Gi
    status: {}
status:
  conditions:
  - lastHeartbeatTime: "2022-01-17T14:35:46Z"
    lastTransitionTime: "2022-01-17T14:35:46Z"
    message: Import is progressing
    reason: ImportProgressing
    status: "True"
    type: Progressing
  - lastHeartbeatTime: "2022-01-17T14:35:46Z"
    lastTransitionTime: "2022-01-17T14:35:46Z"
    message: Import is progressing
    reason: ImportProgressing
    status: "False"
    type: UpToDate
  currentImports:
  - DataVolumeName: custom-data-source-4a0c3f952655
    Digest: sha256:4a0c3f9526551d0294079f1b0171a071a57fe0bf60a2e8529bf4102ee63a67cd
  lastExecutionTimestamp: "2022-01-17T15:18:06Z"
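The duplicated state shown in the SSP CR above can be detected mechanically. The following is a hypothetical Python sketch (not part of any CNV component; the function name and the inlined template list are illustrative) that flags template names appearing more than once in a `dataImportCronTemplates` list:

```python
from collections import Counter

def duplicate_template_names(templates):
    """Return names that appear more than once in a dataImportCronTemplates list."""
    counts = Counter(t["metadata"]["name"] for t in templates)
    return sorted(name for name, n in counts.items() if n > 1)

# Names taken from the SSP CR dump above; the fifth entry is the custom
# template that collides with the CNV-provided centos8-image-cron.
templates = [
    {"metadata": {"name": "rhel8-image-cron"}},
    {"metadata": {"name": "rhel9-image-cron"}},
    {"metadata": {"name": "centos8-image-cron"}},  # CNV-provided
    {"metadata": {"name": "fedora-image-cron"}},
    {"metadata": {"name": "centos8-image-cron"}},  # custom duplicate
]
print(duplicate_template_names(templates))  # ['centos8-image-cron']
```

Because SSP keeps both entries, whichever the consumer applies last wins, which is how the custom `managedDataSource: custom-data-source` ends up on the resulting DataImportCron.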
@nunnatsa, I attempted to verify the bug in two different ways on a PSI cluster running 4.10.0-674 (stage).

Scenario A
----------
* Prerequisite: the "enableCommonBootImageImport" feature gate is set to true in the HCO CR.

1. oc edit -n openshift-cnv hco
2. Paste the dataImportCronTemplate from the bug description under the HCO CR spec:

dataImportCronTemplates:
- metadata:
    annotations:
      cdi.kubevirt.io/storage.bind.immediate.requested: 'true'
    name: centos8-image-cron
  spec:
    managedDataSource: custom-data-source
    schedule: '* * * * *'
    template:
      spec:
        source:
          registry:
            url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
        storage:
          resources:
            requests:
              storage: 10Gi

3. Verified that the HCO CR was updated accordingly.
4. oc get dic centos8-image-cron -n openshift-virtualization-os-images -oyaml
>>> the managedDataSource was custom-data-source

Scenario B
----------
* Prerequisite: the "enableCommonBootImageImport" feature gate is set to false in the HCO CR.

1. Repeat steps 1-3 from scenario A.
2. Set the "enableCommonBootImageImport" feature gate to true in the HCO CR.
3. Repeat step 4 from scenario A.
>>> same results

[cnv-qe-jenkins@c01-issac-410-kldng-executor ~]$ oc get clusterversions.config.openshift.io
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-rc.2   True        False         13h     Cluster version is 4.10.0-rc.2

[cnv-qe-jenkins@c01-issac-410-kldng-executor ~]$ oc get csv -A
NAMESPACE                              NAME                                        DISPLAY                       VERSION              REPLACES                                  PHASE
openshift-cnv                          kubevirt-hyperconverged-operator.v4.10.0    OpenShift Virtualization      4.10.0               kubevirt-hyperconverged-operator.v4.9.2   Succeeded
openshift-local-storage                local-storage-operator.4.9.0-202201270226   Local Storage                 4.9.0-202201270226                                             Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server                0.19.0                                                         Succeeded
openshift-storage                      mcg-operator.v4.10.0                        NooBaa Operator               4.10.0                                                         Succeeded
openshift-storage                      ocs-operator.v4.10.0                        OpenShift Container Storage   4.10.0                                                         Succeeded
openshift-storage                      odf-operator.v4.10.0                        OpenShift Data Foundation     4.10.0                                                         Succeeded

Moving the bug status back to ASSIGNED.
Please disregard my previous comment (https://bugzilla.redhat.com/show_bug.cgi?id=2041519#c1); the verification was incorrect.

I re-verified the two scenarios I mentioned on the same cluster (4.10 stage == 4.10.0-674), this time with a name that collides with the reserved/predefined DICs, and each time I got the webhook validation error:

error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: rhel9-image-cron DataImportCronTable is already defined
You can run `oc replace -f /tmp/oc-edit-850654539.yaml` to try this update again.

Moving to VERIFIED.
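The rejection observed above can be sketched as follows. This is a hypothetical Python illustration of the duplicate-name check the admission webhook appears to perform, not the actual HCO webhook code (which is written in Go); the PREDEFINED set and the function name are assumptions, while the error text mirrors the message quoted above.

```python
# Assumed set of reserved/predefined cron template names, based on the
# CNV-provided templates seen in the SSP CR earlier in this bug.
PREDEFINED = {"rhel8-image-cron", "rhel9-image-cron",
              "centos8-image-cron", "fedora-image-cron"}

def validate_custom_templates(custom_templates, predefined=PREDEFINED):
    """Reject any custom template whose name collides with a predefined one."""
    for tmpl in custom_templates:
        name = tmpl["metadata"]["name"]
        if name in predefined:
            # Error text mirrors the webhook message quoted above.
            raise ValueError(f"{name} DataImportCronTable is already defined")

validate_custom_templates([{"metadata": {"name": "my-own-cron"}}])  # accepted
try:
    validate_custom_templates([{"metadata": {"name": "rhel9-image-cron"}}])
except ValueError as e:
    print(e)  # rhel9-image-cron DataImportCronTable is already defined
```

With this check in place, the collision from the bug description is denied at admission time instead of producing two same-named templates in the SSP CR.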
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947