Bug 2276353 - [RDR] [Discovered Apps] recipe-controller-manager pod in CrashLoopBackOff state
Summary: [RDR] [Discovered Apps] recipe-controller-manager pod in CrashLoopBackOff state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Raghavendra Talur
QA Contact: Sidhant Agrawal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-04-22 07:51 UTC by Sidhant Agrawal
Modified: 2024-07-17 13:20 UTC

Fixed In Version: 4.16.0-92
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-07-17 13:20:11 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github RamenDR recipe pull 25 0 None open Create a make target to package only the CRD in the recipe bundle 2024-04-30 07:27:49 UTC
Github red-hat-storage recipe pull 5 0 None open Bug 2276353: Backport commits from main branch to release-4.16 2024-05-01 16:07:11 UTC
Github red-hat-storage recipe pull 7 0 None open Bug 2276353: Add downstream metadata 2024-05-01 20:50:52 UTC
Red Hat Product Errata RHSA-2024:4591 0 None None None 2024-07-17 13:20:12 UTC

Description Sidhant Agrawal 2024-04-22 07:51:42 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
In an RDR setup, attempted to install the Recipe operator on managed clusters as a potential workaround for bug 2276344. The installation failed with the recipe-controller-manager pod stuck in a CrashLoopBackOff state.


$ oc get pod -n openshift-operators | grep recipe
recipe-controller-manager-5f947d88c7-xfr5v    1/2     CrashLoopBackOff   8 (78s ago)   17m

Pod logs:
```
flag provided but not defined: -health-probe-bind-address
Usage of /manager:
  -config string
    	The controller will load its initial configuration from this file. Omit this flag to use the default configuration values. Command-line flags override configuration from this file.
  -kubeconfig string
    	Paths to a kubeconfig. Only required if out-of-cluster.
  -zap-devel
    	Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
  -zap-encoder value
    	Zap log encoding (one of 'json' or 'console')
  -zap-log-level value
    	Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
  -zap-stacktrace-level value
    	Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
  -zap-time-encoding value
    	Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
```
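The error above means the Deployment passes the `-health-probe-bind-address` argument, but the shipped `/manager` binary never registers that flag, so Go's flag parser rejects it and the process exits, producing the CrashLoopBackOff. A minimal sketch of the mismatch using only the standard `flag` package (the function name `parseProbeAddr` and the `:8081` default are illustrative, following kubebuilder conventions, not taken from the recipe codebase):

```go
package main

import (
	"flag"
	"fmt"
	"io"
)

// parseProbeAddr mimics a manager binary that DOES register the
// -health-probe-bind-address flag, kubebuilder-style.
func parseProbeAddr(args []string) (string, error) {
	fs := flag.NewFlagSet("manager", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // suppress usage text in this demo
	var probeAddr string
	fs.StringVar(&probeAddr, "health-probe-bind-address", ":8081",
		"The address the probe endpoint binds to.")
	err := fs.Parse(args)
	return probeAddr, err
}

func main() {
	// A binary that registers the flag parses it cleanly.
	addr, err := parseProbeAddr([]string{"-health-probe-bind-address=:9440"})
	fmt.Println(addr, err) // :9440 <nil>

	// A binary that does NOT register it (like the shipped /manager here)
	// fails exactly as in the pod logs above.
	fs := flag.NewFlagSet("old-manager", flag.ContinueOnError)
	fs.SetOutput(io.Discard)
	err = fs.Parse([]string{"-health-probe-bind-address=:9440"})
	fmt.Println(err) // flag provided but not defined: -health-probe-bind-address
}
```

This is why the usage dump in the logs lists only `-config`, `-kubeconfig`, and the `-zap-*` flags: the container args and the binary's registered flag set are out of sync between the bundle and the image.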



Version of all relevant components (if applicable):
OCP: 4.16.0-0.nightly-2024-04-16-195622
ODF: 4.16.0-79.stable
recipe.v4.16.0-79.stable

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Configure RDR setup with 1 ACM hub and 2 managed clusters
2. Create DRPolicy
3. Observe the ramen-dr-cluster-operator pod status on managed clusters (goes into CrashLoopBackOff)
4. Install the Recipe operator on managed clusters and observe the status of the recipe-controller-manager pod (goes into CrashLoopBackOff)


Actual results:
recipe-controller-manager pod goes into CrashLoopBackOff state

Expected results:
recipe-controller-manager pod should reach the Running state without restarts


Additional info:

Comment 12 errata-xmlrpc 2024-07-17 13:20:11 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591

