Bug 1902046 - Not possible to edit CDIConfig (through CDI CR / CDIConfig)
Summary: Not possible to edit CDIConfig (through CDI CR / CDIConfig)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 2.6.0
Assignee: Bartosz Rybacki
QA Contact: Alex Kalenyuk
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-26 16:24 UTC by Alex Kalenyuk
Modified: 2021-03-10 11:20 UTC
CC List: 3 users

Fixed In Version: hco-v2.6.0-52
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-10 11:19:48 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt hyperconverged-cluster-operator pull 995 0 None closed Excludes CDI.Spec.Config from HCO Reconciliation Loop. 2021-02-16 13:27:41 UTC
Red Hat Product Errata RHSA-2021:0799 0 None None None 2021-03-10 11:20:57 UTC

Description Alex Kalenyuk 2020-11-26 16:24:22 UTC
Description of problem:
CDIConfig was recently moved into the CDI CR, but downstream (d/s) we can't change fields in the CDI CR because the HCO reconciliation loop reverts them.

Version-Release number of selected component (if applicable):
2.6.0

How reproducible:
100%

Steps to Reproduce:
1. Edit the CDI config in the CDI CR (spec.config), e.g. via `oc edit cdi cdi-kubevirt-hyperconverged`

Actual results:
Changes reverted

Expected results:
Changes saved

Additional info:
- CDIConfig was recently moved to the CDI CR
- It was recently decided that HCO will not expose new APIs regarding strict reconciliation until 2.7
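To illustrate the mechanism, here is a minimal sketch (NOT actual HCO code, just a toy model of the operator pattern): a reconcile loop that rebuilds the whole spec from its own desired state silently drops any field it does not manage, which is why edits to spec.config were reverted. The `reconcile_fixed` variant mimics the eventual fix (PR 995 excludes CDI.Spec.Config from the reconciliation loop).

```python
# Toy model of an operator reconcile loop; the dict mirrors the CDI
# CR spec shown in this report. Not real HCO code.
DESIRED_SPEC = {
    "infra": {},
    "uninstallStrategy": "BlockUninstallIfWorkloadsExist",
    "workload": {},
}

def reconcile_old(current_spec):
    """Pre-fix behaviour: overwrite the spec entirely with the
    operator's desired state, losing user-added fields."""
    return dict(DESIRED_SPEC)

def reconcile_fixed(current_spec):
    """Post-fix behaviour (PR 995 excludes spec.config): carry the
    user-managed 'config' field over untouched."""
    new_spec = dict(DESIRED_SPEC)
    if "config" in current_spec:
        new_spec["config"] = current_spec["config"]
    return new_spec

# A user adds spec.config with a feature gate, as in this bug:
edited = dict(DESIRED_SPEC)
edited["config"] = {"featureGates": ["HonorWaitForFirstConsumer"]}

print("config" in reconcile_old(edited))    # False: the edit is reverted
print("config" in reconcile_fixed(edited))  # True: the edit survives
```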

[cnv-qe-jenkins@akalenyu-26245-p97jm-executor cnv-tests]$ oc edit cdi cdi-kubevirt-hyperconverged
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  creationTimestamp: "2020-11-22T11:58:10Z"
  finalizers:
  - operator.cdi.kubevirt.io
  generation: 17
  labels:
    app: kubevirt-hyperconverged
  name: cdi-kubevirt-hyperconverged
  resourceVersion: "3328250"
  selfLink: /apis/cdi.kubevirt.io/v1beta1/cdis/cdi-kubevirt-hyperconverged
  uid: 29cc4643-7981-4d84-be29-116f86009033
spec:
  config:
    featureGates:
    - HonorWaitForFirstConsumer
  infra: {}
  uninstallStrategy: BlockUninstallIfWorkloadsExist
  workload: {}
status:
  conditions:
  - lastHeartbeatTime: "2020-11-22T11:59:44Z"
    lastTransitionTime: "2020-11-22T11:59:44Z"
    message: Deployment Completed
    reason: DeployCompleted
    status: "True"
    type: Available
  - lastHeartbeatTime: "2020-11-22T11:59:44Z"
    lastTransitionTime: "2020-11-22T11:59:44Z"
    status: "False"
    type: Progressing
  - lastHeartbeatTime: "2020-11-25T09:53:26Z"
    lastTransitionTime: "2020-11-25T09:53:26Z"
    status: "False"
    type: Degraded
  observedVersion: v2.6.0
  operatorVersion: v2.6.0
  phase: Deployed
  targetVersion: v2.6.0
cdi.cdi.kubevirt.io/cdi-kubevirt-hyperconverged edited

[cnv-qe-jenkins@akalenyu-26245-p97jm-executor cnv-tests]$ oc get cdi cdi-kubevirt-hyperconverged -oyaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  creationTimestamp: "2020-11-22T11:58:10Z"
  finalizers:
  - operator.cdi.kubevirt.io
  generation: 19
  labels:
    app: kubevirt-hyperconverged
  managedFields:
  - apiVersion: cdi.kubevirt.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        .: {}
        f:infra: {}
        f:uninstallStrategy: {}
        f:workload: {}
      f:status: {}
    manager: hyperconverged-cluster-operator
    operation: Update
    time: "2020-11-22T11:58:10Z"
  - apiVersion: cdi.kubevirt.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers: {}
      f:status:
        f:conditions: {}
        f:observedVersion: {}
        f:operatorVersion: {}
        f:phase: {}
        f:targetVersion: {}
    manager: virt-cdi-operator
    operation: Update
    time: "2020-11-25T09:53:26Z"
  name: cdi-kubevirt-hyperconverged
  resourceVersion: "4751903"
  selfLink: /apis/cdi.kubevirt.io/v1beta1/cdis/cdi-kubevirt-hyperconverged
  uid: 29cc4643-7981-4d84-be29-116f86009033
spec:
  infra: {}
  uninstallStrategy: BlockUninstallIfWorkloadsExist
  workload: {}
status:
  conditions:
  - lastHeartbeatTime: "2020-11-22T11:59:44Z"
    lastTransitionTime: "2020-11-22T11:59:44Z"
    message: Deployment Completed
    reason: DeployCompleted
    status: "True"
    type: Available
  - lastHeartbeatTime: "2020-11-22T11:59:44Z"
    lastTransitionTime: "2020-11-22T11:59:44Z"
    status: "False"
    type: Progressing
  - lastHeartbeatTime: "2020-11-25T09:53:26Z"
    lastTransitionTime: "2020-11-25T09:53:26Z"
    status: "False"
    type: Degraded
  observedVersion: v2.6.0
  operatorVersion: v2.6.0
  phase: Deployed
  targetVersion: v2.6.0

[cnv-qe-jenkins@akalenyu-26245-p97jm-executor cnv-tests]$ oc get cdiconfig config -oyaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDIConfig
metadata:
  creationTimestamp: "2020-11-22T11:59:42Z"
  generation: 75
  labels:
    app: containerized-data-importer
    cdi.kubevirt.io: ""
  managedFields:
  - apiVersion: cdi.kubevirt.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:cdi.kubevirt.io: {}
        f:ownerReferences: {}
      f:spec: {}
      f:status:
        .: {}
        f:defaultPodResourceRequirements:
          .: {}
          f:limits:
            .: {}
            f:cpu: {}
            f:memory: {}
          f:requests:
            .: {}
            f:cpu: {}
            f:memory: {}
        f:filesystemOverhead:
          .: {}
          f:global: {}
          f:storageClass:
            .: {}
            f:csi-manila-ceph: {}
            f:hostpath-provisioner: {}
            f:local-block: {}
            f:nfs: {}
            f:ocs-storagecluster-ceph-rbd: {}
            f:ocs-storagecluster-ceph-rgw: {}
            f:ocs-storagecluster-cephfs: {}
            f:openshift-storage.noobaa.io: {}
            f:standard: {}
        f:scratchSpaceStorageClass: {}
        f:uploadProxyURL: {}
    manager: virt-cdi-controller
    operation: Update
    time: "2020-11-26T16:20:02Z"
  name: config
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CDI
    name: cdi-kubevirt-hyperconverged
    uid: 29cc4643-7981-4d84-be29-116f86009033
  resourceVersion: "4751904"
  selfLink: /apis/cdi.kubevirt.io/v1beta1/cdiconfigs/config
  uid: 2eab0139-24a6-4a7a-b115-be4aaea14510
spec: {}
status:
  defaultPodResourceRequirements:
    limits:
      cpu: "0"
      memory: "0"
    requests:
      cpu: "0"
      memory: "0"
  filesystemOverhead:
    global: "0.055"
    storageClass:
      csi-manila-ceph: "0.055"
      hostpath-provisioner: "0.055"
      local-block: "0.055"
      nfs: "0.055"
      ocs-storagecluster-ceph-rbd: "0.055"
      ocs-storagecluster-ceph-rgw: "0.055"
      ocs-storagecluster-cephfs: "0.055"
      openshift-storage.noobaa.io: "0.055"
      standard: "0.055"
  scratchSpaceStorageClass: standard
  uploadProxyURL: cdi-uploadproxy-openshift-cnv.apps.akalenyu-26245.cnv-qe.rhcloud.com


(Tried to add:
  config:
    featureGates:
    - HonorWaitForFirstConsumer
)

Comment 1 Alex Kalenyuk 2021-01-12 12:14:58 UTC
Verified on CNV 2.6.0, CDI: Containerized Data Importer v1.27.0
HCO-v2.6.0-384
HCO image: registry.redhat.io/container-native-virtualization/hyperconverged-cluster-operator@sha256:8f226ed8c7cac0246be38c3304320ca8281bf03a09d1f4846973778771255b24
CSV creation time: 2020-12-20 09:44:30
hyperconverged-cluster-operator v2.6.0-55

Comment 4 errata-xmlrpc 2021-03-10 11:19:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 2.6.0 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0799

