Bug 2302257 - [RDR] odfmo-controller-manager is in CLBO
Summary: [RDR] odfmo-controller-manager is in CLBO
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.17.0
Assignee: umanga
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks: 2302463
Reported: 2024-08-01 14:45 UTC by Pratik Surve
Modified: 2024-10-30 14:29 UTC
CC List: 5 users

Fixed In Version: 4.17.0-65
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2302463 (view as bug list)
Environment:
Last Closed: 2024-10-30 14:29:40 UTC
Embargoed:


Links
Github red-hat-storage odf-multicluster-orchestrator pull 229 (Merged): Bug 2302257: console: use github.com/openshift/api/console/v1 (last updated 2024-08-02 07:11:54 UTC)
Red Hat Bugzilla 2297946 (CLOSED): update console API from v1alpha1 to v1 (last updated 2024-08-19 07:43:04 UTC)
Red Hat Issue Tracker OCSBZM-8791 (last updated 2024-08-01 14:46:01 UTC)
Red Hat Product Errata RHSA-2024:8676 (last updated 2024-10-30 14:29:48 UTC)

Description Pratik Surve 2024-08-01 14:45:29 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR] odfmo-controller-manager is in CrashLoopBackOff (CLBO)

Version of all relevant components (if applicable):

OCP version:- 4.17.0-0.nightly-2024-07-31-035751
ODF version:- 4.17.0-57
CEPH version:- ceph version 19.1.0-0-g9025b9024ba (9025b9024baf597d63005552b5ee004013630404) squid (rc)
ACM version:- 2.12.0-25
SUBMARINER version:- v0.18.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

Yes.

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy ODF on OCP 4.17 (a sketch for checking the resulting pod state follows these steps).
2.
3.
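
A minimal Go sketch for confirming the failure after step 1. It assumes standard client-go and the controller-runtime config helper; the namespace comes from the lease line in the logs below and the pod-name prefix from the summary, and it is not part of the product code:

// diagnose_clbo.go: list pods in openshift-operators and print any
// odfmo-controller-manager container stuck in a waiting state
// (for this bug the expected reason is CrashLoopBackOff).
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	clientset := kubernetes.NewForConfigOrDie(config.GetConfigOrDie())

	pods, err := clientset.CoreV1().Pods("openshift-operators").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if !strings.HasPrefix(p.Name, "odfmo-controller-manager") {
			continue
		}
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				// Prints e.g. "<pod>/<container> waiting: CrashLoopBackOff"
				fmt.Printf("%s/%s waiting: %s\n", p.Name, st.Name, st.State.Waiting.Reason)
			}
		}
	}
}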


Actual results:

0801 14:35:20.004505       1 leaderelection.go:260] successfully acquired lease openshift-operators/1d19c724.odf.openshift.io
{"level":"info","ts":1722522920.0047233,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"mirrorpeer","controllerGroup":"multicluster.odf.openshift.io","controllerKind":"MirrorPeer","source":"kind source: *v1alpha1.MirrorPeer"}
{"level":"info","ts":1722522920.0048068,"caller":"controller/controller.go:186","msg":"Starting Controller","controller":"mirrorpeer","controllerGroup":"multicluster.odf.openshift.io","controllerKind":"MirrorPeer"}
{"level":"info","ts":1722522920.0047915,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster","source":"kind source: *v1.ManagedCluster"}
{"level":"info","ts":1722522920.0048218,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"secret","controllerGroup":"","controllerKind":"Secret","source":"kind source: *v1.Secret"}
{"level":"info","ts":1722522920.0048485,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"managedclusterview","controllerGroup":"view.open-cluster-management.io","controllerKind":"ManagedClusterView","source":"kind source: *v1beta1.ManagedClusterView"}
{"level":"info","ts":1722522920.0048697,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"secret","controllerGroup":"","controllerKind":"Secret","source":"kind source: *v1.ConfigMap"}
{"level":"info","ts":1722522920.0048552,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster","source":"kind source: *v1beta1.ManagedClusterView"}
{"level":"info","ts":1722522920.0048852,"caller":"controller/controller.go:186","msg":"Starting Controller","controller":"secret","controllerGroup":"","controllerKind":"Secret"}
{"level":"info","ts":1722522920.0048795,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"drpolicy","controllerGroup":"ramendr.openshift.io","controllerKind":"DRPolicy","source":"kind source: *v1alpha1.DRPolicy"}
{"level":"info","ts":1722522920.004879,"caller":"controller/controller.go:186","msg":"Starting Controller","controller":"managedclusterview","controllerGroup":"view.open-cluster-management.io","controllerKind":"ManagedClusterView"}
{"level":"info","ts":1722522920.0049095,"caller":"controller/controller.go:186","msg":"Starting Controller","controller":"drpolicy","controllerGroup":"ramendr.openshift.io","controllerKind":"DRPolicy"}
{"level":"info","ts":1722522920.0049021,"caller":"controller/controller.go:178","msg":"Starting EventSource","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster","source":"kind source: *v1.ConfigMap"}
{"level":"info","ts":1722522920.0049236,"caller":"controller/controller.go:186","msg":"Starting Controller","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster"}
W0801 14:35:21.497894       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:35:21.497930       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
W0801 14:35:22.407853       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:35:22.407889       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
{"level":"info","ts":1722522923.6998973,"caller":"controller/controller.go:220","msg":"Starting workers","controller":"drpolicy","controllerGroup":"ramendr.openshift.io","controllerKind":"DRPolicy","worker count":1}
{"level":"info","ts":1722522923.7010078,"caller":"controller/controller.go:220","msg":"Starting workers","controller":"managedclusterview","controllerGroup":"view.open-cluster-management.io","controllerKind":"ManagedClusterView","worker count":1}
{"level":"info","ts":1722522923.7940545,"caller":"controller/controller.go:220","msg":"Starting workers","controller":"mirrorpeer","controllerGroup":"multicluster.odf.openshift.io","controllerKind":"MirrorPeer","worker count":1}
{"level":"info","ts":1722522923.7943864,"msg":"Generated reconcile requests from internal secrets","controller":"MirrorPeerSecretReconciler","NumberOfRequests":0}
W0801 14:35:25.161520       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:35:25.161574       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
W0801 14:35:30.004662       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:35:30.004692       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
W0801 14:35:37.414072       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:35:37.414104       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
W0801 14:35:58.509969       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:35:58.510000       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
W0801 14:36:48.311376       1 reflector.go:539] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
E0801 14:36:48.311411       1 reflector.go:147] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: Failed to watch *v1alpha1.ConsolePlugin: failed to list *v1alpha1.ConsolePlugin: conversion webhook for console.openshift.io/v1, Kind=ConsolePlugin failed: Post "https://webhook.openshift-console-operator.svc:9443/crdconvert?timeout=30s": service "webhook" not found
{"level":"error","ts":1722523040.0058413,"caller":"controller/controller.go:203","msg":"Could not wait for Cache to sync","controller":"secret","controllerGroup":"","controllerKind":"Secret","error":"failed to wait for secret caches to sync: timed out waiting for cache to be synced for Kind *v1.Secret","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.1\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:203\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:208\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/manager/runnable_group.go:223"}
{"level":"info","ts":1722523040.0059292,"caller":"manager/internal.go:516","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":1722523040.0059426,"caller":"manager/internal.go:520","msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":1722523040.0059702,"caller":"controller/controller.go:220","msg":"Starting workers","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster","worker count":1}
{"level":"info","ts":1722523040.0059786,"caller":"controller/controller.go:240","msg":"Shutdown signal received, waiting for all workers to finish","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster"}
{"level":"info","ts":1722523040.0059564,"caller":"controller/controller.go:240","msg":"Shutdown signal received, waiting for all workers to finish","controller":"drpolicy","controllerGroup":"ramendr.openshift.io","controllerKind":"DRPolicy"}
{"level":"info","ts":1722523040.005993,"caller":"controller/controller.go:242","msg":"All workers finished","controller":"drpolicy","controllerGroup":"ramendr.openshift.io","controllerKind":"DRPolicy"}
{"level":"info","ts":1722523040.0059853,"caller":"controller/controller.go:242","msg":"All workers finished","controller":"managedcluster","controllerGroup":"cluster.open-cluster-management.io","controllerKind":"ManagedCluster"}
{"level":"info","ts":1722523040.0060172,"caller":"controller/controller.go:240","msg":"Shutdown signal received, waiting for all workers to finish","controller":"mirrorpeer","controllerGroup":"multicluster.odf.openshift.io","controllerKind":"MirrorPeer"}
{"level":"info","ts":1722523040.0060265,"caller":"controller/controller.go:242","msg":"All workers finished","controller":"mirrorpeer","controllerGroup":"multicluster.odf.openshift.io","controllerKind":"MirrorPeer"}
{"level":"error","ts":1722523040.0060267,"msg":"Failed to initialize multicluster console to manager","error":"Timeout: failed waiting for *v1alpha1.ConsolePlugin Informer to sync"}
{"level":"info","ts":1722523040.006019,"caller":"controller/controller.go:240","msg":"Shutdown signal received, waiting for all workers to finish","controller":"managedclusterview","controllerGroup":"view.open-cluster-management.io","controllerKind":"ManagedClusterView"}
{"level":"error","ts":1722523040.00604,"caller":"manager/internal.go:490","msg":"error received after stop sequence was engaged","error":"Timeout: failed waiting for *v1alpha1.ConsolePlugin Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/manager/internal.go:490"}
{"level":"info","ts":1722523040.0060525,"caller":"controller/controller.go:242","msg":"All workers finished","controller":"managedclusterview","controllerGroup":"view.open-cluster-management.io","controllerKind":"ManagedClusterView"}
{"level":"info","ts":1722523040.0060678,"caller":"manager/internal.go:526","msg":"Stopping and waiting for caches"}
W0801 14:37:20.006316       1 reflector.go:462] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: watch of *v1beta1.ManagedClusterView ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
W0801 14:37:20.006347       1 reflector.go:462] pkg/mod/k8s.io/client-go.5/tools/cache/reflector.go:229: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
{"level":"info","ts":1722523040.006408,"caller":"manager/internal.go:530","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":1722523040.0064957,"logger":"controller-runtime.webhook","caller":"webhook/server.go:249","msg":"Shutting down webhook server with timeout of 1 minute"}
{"level":"info","ts":1722523040.0065856,"caller":"manager/internal.go:533","msg":"Stopping and waiting for HTTP servers"}
{"level":"info","ts":1722523040.00661,"caller":"manager/server.go:43","msg":"shutting down server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":1722523040.0066104,"logger":"controller-runtime.metrics","caller":"server/server.go:231","msg":"Shutting down metrics server with timeout of 1 minute"}
{"level":"info","ts":1722523040.0066612,"caller":"manager/internal.go:537","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"error","ts":1722523040.0067115,"msg":"Received an error while waiting. exiting..","error":"failed to wait for secret caches to sync: timed out waiting for cache to be synced for Kind *v1.Secret"}

Expected results:

The odfmo-controller-manager pod should be in the Running state.

Additional info:

Comment 7 errata-xmlrpc 2024-10-30 14:29:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:8676

