Description of problem:

In HCO logs we see something like:

{"level":"info","ts":1574269878.4764912,"logger":"controller_hyperconverged","msg":"Reconciling HyperConverged operator","Request.Namespace":"openshift","Request.Name":"hyperconverged-cluster"}
{"level":"info","ts":1574269878.4765778,"logger":"controller_hyperconverged","msg":"No HyperConverged resource","Request.Namespace":"openshift","Request.Name":"hyperconverged-cluster"}
{"level":"info","ts":1574269878.4878688,"logger":"controller_hyperconverged","msg":"Reconciling HyperConverged operator","Request.Namespace":"openshift","Request.Name":"hyperconverged-cluster"}
{"level":"info","ts":1574269878.4879391,"logger":"controller_hyperconverged","msg":"No HyperConverged resource","Request.Namespace":"openshift","Request.Name":"hyperconverged-cluster"}

The issue is that all of our secondary watches (all the watches that aren't on the HyperConverged resource itself) enqueue a request for the owner. But for resources that are cluster-scoped or live in a different namespace ('openshift' vs. 'openshift-cnv'), the request added to the queue points to a different namespace than the one the HyperConverged resource is in, so the reconciler fails to find it.

Version-Release number of selected component (if applicable):
2.2.0

How reproducible:
100%

Steps to Reproduce:
1. Deploy CNV.
2. Check HCO logs.

Actual results:
{"level":"info","ts":1574269878.4765778,"logger":"controller_hyperconverged","msg":"No HyperConverged resource","Request.Namespace":"openshift","Request.Name":"hyperconverged-cluster"}

Expected results:
Successful reconciliation.

Additional info:
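For context, here is a minimal controller-runtime sketch of the mismatch and one way to avoid it. This is an illustration only, not the actual HCO source: it assumes the older map-func handler API (pre-v0.5 controller-runtime), and the function and variable names are made up. The point is that handler.EnqueueRequestForOwner enqueues a request in the *watched object's* namespace, so an owned object in 'openshift' produces a request in 'openshift', where no HyperConverged CR exists; mapping every secondary-resource event to the one fixed HyperConverged name/namespace sidesteps that.

package hyperconverged

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// hcoRequest is the fixed location of the HyperConverged CR
// (values taken from the logs above).
var hcoRequest = reconcile.Request{
	NamespacedName: types.NamespacedName{
		Namespace: "openshift-cnv",
		Name:      "hyperconverged-cluster",
	},
}

// addSecondaryWatch (hypothetical name) watches a secondary resource.
// The problematic pattern is &handler.EnqueueRequestForOwner{...},
// which enqueues the owner's name in the watched object's own
// namespace -- for a KubevirtCommonTemplatesBundle in "openshift"
// that yields Request.Namespace == "openshift".
// Instead, map every event back to the single fixed request:
func addSecondaryWatch(c controller.Controller, obj runtime.Object) error {
	return c.Watch(
		&source.Kind{Type: obj},
		&handler.EnqueueRequestsFromMapFunc{
			ToRequests: handler.ToRequestsFunc(
				func(_ handler.MapObject) []reconcile.Request {
					return []reconcile.Request{hcoRequest}
				}),
		},
	)
}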
Workaround (note: write the patch to the same file the patch commands read):

echo -e "spec:\n test: 1234" > patch-file.yaml
oc patch -n openshift KubevirtCommonTemplatesBundle common-templates-hyperconverged-cluster --type merge --patch "$(cat patch-file.yaml)"
oc patch -n openshift-cnv KubevirtNodeLabellerBundles node-labeller-hyperconverged-cluster --type merge --patch "$(cat patch-file.yaml)"
We must not release cnv-2.2 without this fix, as it can cause data loss.
(In reply to Dan Kenigsberg from comment #2)
> We must not release cnv-2.2 without this fix, as it can cause data loss.

Sorry, this bug is related to the data loss bug 1786475, but on its own it is not a blocker. In practice, the underlying CRs are unlikely to be modified of their own accord.
Deployed CNV 2.2 on a PSI environment and checked HCO logs:

{"level":"info","ts":1579044558.9341733,"logger":"controller_hyperconverged","msg":"Reconciling HyperConverged operator","Request.Namespace":"openshift-cnv","Request.Name":"hyperconverged-cluster"}
{"level":"info","ts":1579044558.9342268,"logger":"controller_hyperconverged","msg":"KubeVirt config already exists","Request.Namespace":"openshift-cnv","Request.Name":"hyperconverged-cluster","KubeVirtConfig.Namespace":"openshift-cnv","KubeVirtConfig.Name":"kubevirt-config"}

Moving to VERIFIED.
Updating verified version:

Client Version: 4.3.0-0.nightly-2020-01-14-043441
Server Version: 4.3.0-0.nightly-2020-01-14-043441
Kubernetes Version: v1.16.2
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307