Bug 2232555 - [RDR] token-exchange-agent pod in CrashLoopBackOff state
Summary: [RDR] token-exchange-agent pod in CrashLoopBackOff state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ODF 4.13.3
Assignee: umanga
QA Contact: Sidhant Agrawal
URL:
Whiteboard:
Depends On: 2227017
Blocks:
 
Reported: 2023-08-17 10:54 UTC by Vineet
Modified: 2023-09-27 14:24 UTC
CC List: 4 users

Fixed In Version: 4.13.3-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2227017
Environment:
Last Closed: 2023-09-27 14:22:42 UTC
Embargoed:




Links:
- Github red-hat-storage/odf-multicluster-orchestrator pull 178 (open): Bug 2232555: [release-4.13] Prevent CLBO issues by upgrading to k8s 0.26.4 (last updated 2023-08-28 12:42:41 UTC)
- Red Hat Product Errata RHSA-2023:5376 (last updated 2023-09-27 14:24:13 UTC)

Description Vineet 2023-08-17 10:54:42 UTC
+++ This bug was initially created as a clone of Bug #2227017 +++

Description of problem (please be as detailed as possible and provide log
snippets):
Using OCP 4.14, while configuring an RDR setup, after creating the DRPolicy on the hub cluster, observed that the token-exchange-agent pods on the managed clusters go into CrashLoopBackOff state and do not recover.

Version of all relevant components (if applicable):
OCP: 4.14.0-0.nightly-2023-07-26-132453
ODF: 4.14.0-86
ACM: 2.9.0-62 (quay.io:443/acm-d/acm-custom-registry:2.9.0-DOWNSTREAM-2023-07-26-20-16-55)
Submariner: 0.16.0 (brew.registry.redhat.io/rh-osbs/iib:543072)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, the DRPolicy doesn't reach the Validated state.

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy 3 OCP 4.14 clusters
2. Install RHACM on the hub, import the other two clusters, and connect them using Submariner add-ons with globalnet enabled
3. Deploy ODF 4.14 on both managed clusters
4. Create DRPolicy on hub cluster
5. Observe the pod status


Actual results:
token-exchange-agent pod in CrashLoopBackOff state 

Expected results:
token-exchange-agent pod in Running state 

Additional info:

Pod status:
```
token-exchange-agent-59d8c546f4-99kcv                             0/1     CrashLoopBackOff   18 (22s ago)   67m    10.135.0.52   compute-0   <none>           <none>
```

Pod logs:
```
W0727 12:29:39.796683       1 cmd.go:213] Using insecure, self-signed certificates
I0727 12:29:40.021360       1 observer_polling.go:159] Starting file observer
I0727 12:29:40.087867       1 builder.go:262] tokenexchange version -
I0727 12:29:40.261135       1 token_exchanger_agent.go:58] Running "tokenexchange"
W0727 12:29:40.261743       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0727 12:29:40.261758       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
I0727 12:29:40.262583       1 common.go:40] Health probes server is running.
I0727 12:29:40.264325       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0727 12:29:40.264342       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
I0727 12:29:40.264371       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0727 12:29:40.264399       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0727 12:29:40.264397       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0727 12:29:40.264424       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0727 12:29:40.264706       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-3350406937/tls.crt::/tmp/serving-cert-3350406937/tls.key"
I0727 12:29:40.265177       1 secure_serving.go:210] Serving securely on [::]:8443
I0727 12:29:40.265212       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x14bd0d0]

goroutine 1 [running]:
k8s.io/client-go/discovery.convertAPIResource(...)
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/aggregated_discovery.go:114
k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00004b920, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/aggregated_discovery.go:95 +0x6f0
k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc0006d8000, 0x15}, {0xc0010ba020, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/aggregated_discovery.go:49 +0x125
k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0xc000d567b0?)
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/discovery_client.go:328 +0x3de
k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xc000d56be0?)
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/discovery_client.go:203 +0x65
k8s.io/client-go/discovery.ServerGroupsAndResources({0x2f0cae8, 0xc0011a5e30})
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/discovery_client.go:413 +0x59
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/discovery_client.go:376 +0x25
k8s.io/client-go/discovery.withRetries(0x2, 0xc000d56bf8)
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/discovery_client.go:651 +0x71
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/discovery/discovery_client.go:375 +0x3a
k8s.io/client-go/restmapper.GetAPIGroupResources({0x2f0cae8?, 0xc0011a5e30?})
	/remote-source/deps/gomod/pkg/mod/k8s.io/client-go.3/restmapper/discovery.go:148 +0x42
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper.func1()
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.6/pkg/client/apiutil/dynamicrestmapper.go:94 +0x25
sigs.k8s.io/controller-runtime/pkg/client/apiutil.(*dynamicRESTMapper).setStaticMapper(...)
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.6/pkg/client/apiutil/dynamicrestmapper.go:130
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0xc000974b40?, {0x0, 0x0, 0x2a98b4d?})
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.6/pkg/client/apiutil/dynamicrestmapper.go:110 +0x182
sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc000974b40?, {0xc000bbbab0?, {0x0?, 0x0?}, {0x0?, 0x0?}})
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.6/pkg/client/client.go:109 +0x1d1
sigs.k8s.io/controller-runtime/pkg/client.New(...)
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.6/pkg/client/client.go:77
github.com/red-hat-storage/odf-multicluster-orchestrator/addons/token-exchange.getClient(0x41416a?)
	/remote-source/app/addons/token-exchange/manager.go:51 +0x29
github.com/red-hat-storage/odf-multicluster-orchestrator/addons/token-exchange.registerHandler({0x7ffc992dab08?, 0xc000a1a4e0?}, 0x1bf08eb000?, 0xc000d571b8?)
	/remote-source/app/addons/token-exchange/secret_exchange_handler_register.go:29 +0x109
github.com/red-hat-storage/odf-multicluster-orchestrator/addons/token-exchange.(*AgentOptions).RunAgent(0xc000cb17a0, {0x2efdc50?, 0xc000cea0a0}, 0xc00102b100)
	/remote-source/app/addons/token-exchange/token_exchanger_agent.go:83 +0x35a
github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerBuilder).Run(0xc000d02120, {0x2efdc50?, 0xc000cea0a0}, 0x0)
	/remote-source/deps/gomod/pkg/mod/github.com/openshift/library-go.0-20230127195720-edf819b079cf/pkg/controller/controllercmd/builder.go:311 +0x15bb
github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerCommandConfig).StartController(0xc000bea9c0, {0x2efdc50?, 0xc0009fe000})
	/remote-source/deps/gomod/pkg/mod/github.com/openshift/library-go.0-20230127195720-edf819b079cf/pkg/controller/controllercmd/cmd.go:294 +0x625
github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerCommandConfig).NewCommandWithContext.func1(0xc000cf6000?, {0x2a79e2f?, 0x4?, 0x4?})
	/remote-source/deps/gomod/pkg/mod/github.com/openshift/library-go.0-20230127195720-edf819b079cf/pkg/controller/controllercmd/cmd.go:137 +0x3eb
github.com/spf13/cobra.(*Command).execute(0xc000cf6000, {0xc000ce89c0, 0x4, 0x4})
	/remote-source/deps/gomod/pkg/mod/github.com/spf13/cobra.1/command.go:920 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0x44f3fc0)
	/remote-source/deps/gomod/pkg/mod/github.com/spf13/cobra.1/command.go:1044 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/remote-source/deps/gomod/pkg/mod/github.com/spf13/cobra.1/command.go:968
github.com/red-hat-storage/odf-multicluster-orchestrator/cmd.Execute()
	/remote-source/app/cmd/root.go:22 +0x25
main.main()
	/remote-source/app/main.go:31 +0x17
```
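
For context on the trace above: the panic originates in client-go's aggregated discovery parsing (convertAPIResource in aggregated_discovery.go), where a resource's ResponseKind field is a pointer that gets dereferenced without a nil check; a discovery document that omits it triggers the SIGSEGV. Below is a minimal sketch of the failure pattern and the guarded variant, using simplified stand-in types (illustrative only, not the verbatim client-go source):

```go
package main

import "fmt"

// GroupVersionKind stands in for metav1.GroupVersionKind.
type GroupVersionKind struct{ Group, Version, Kind string }

// APIResourceDiscovery stands in for the aggregated discovery type:
// ResponseKind is a pointer and may be absent (nil) in the document
// returned by the API server.
type APIResourceDiscovery struct {
	Resource     string
	ResponseKind *GroupVersionKind
}

// convertBroken dereferences ResponseKind unconditionally, which
// panics with a nil pointer dereference when it is nil -- the
// SIGSEGV seen in the pod log above.
func convertBroken(in APIResourceDiscovery) string {
	return in.ResponseKind.Kind // panics if ResponseKind == nil
}

// convertGuarded checks the pointer first and reports an error
// instead of crashing, the general shape of the upstream fix.
func convertGuarded(in APIResourceDiscovery) (string, error) {
	if in.ResponseKind == nil {
		return "", fmt.Errorf("resource %q has nil ResponseKind", in.Resource)
	}
	return in.ResponseKind.Kind, nil
}

func main() {
	r := APIResourceDiscovery{Resource: "pods"} // ResponseKind left nil
	if _, err := convertGuarded(r); err != nil {
		fmt.Println("skipped:", err)
	}
	// convertBroken(r) would panic here.
}
```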

--- Additional comment from RHEL Program Management on 2023-07-27 13:00:18 UTC ---

This bug, having no release flag set previously, now has the release flag 'odf-4.14.0' set to '?', and so is being proposed to be fixed in the ODF 4.14.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from RHEL Program Management on 2023-07-27 13:00:18 UTC ---

Since this bug has severity set to 'urgent', it is being proposed as a blocker for the currently set release flag. Please resolve ASAP.

--- Additional comment from krishnaram Karthick on 2023-07-27 14:41:04 UTC ---

Adding 'testblocker' as we are blocked with RDR cluster deployment.

--- Additional comment from umanga on 2023-07-31 11:24:57 UTC ---

Looks like the issue is caused by a Kubernetes dependency. We are upgrading to a version that has the fix.
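
The linked PR ("Bug 2232555: [release-4.13] Prevent CLBO issues by upgrading to k8s 0.26.4") bumps the k8s.io module set. An illustrative go.mod excerpt of the kind of change involved (the exact dependency list and prior versions are assumptions; the authoritative diff is in the PR):

```
require (
	k8s.io/api          v0.26.4 // bumped from an earlier v0.26.x
	k8s.io/apimachinery v0.26.4 // bumped from an earlier v0.26.x
	k8s.io/client-go    v0.26.4 // carries the aggregated-discovery nil-check fix
)
```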

--- Additional comment from RHEL Program Management on 2023-07-31 11:25:06 UTC ---

This BZ is being approved for the ODF 4.14.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.14.0'.

--- Additional comment from RHEL Program Management on 2023-07-31 11:25:06 UTC ---

Since this bug has been approved for the ODF 4.14.0 release, through release flag 'odf-4.14.0+', the Target Release is being set to 'ODF 4.14.0'.

--- Additional comment from errata-xmlrpc on 2023-08-01 04:12:18 UTC ---

This bug has been added to advisory RHBA-2023:115514 by ceph-build service account (ceph-build.COM)

--- Additional comment from Sidhant Agrawal on 2023-08-02 04:48:24 UTC ---

Version used:
OCP: 4.14.0-0.nightly-2023-07-31-181848
ODF: 4.14.0-93
ACM: 2.9.0-62 (quay.io:443/acm-d/acm-custom-registry:2.9.0-DOWNSTREAM-2023-07-31-16-30-30)
Submariner: 0.16.0 (brew.registry.redhat.io/rh-osbs/iib:543072)


Output from managed clusters:

C1:
```
$ oc get pod -n openshift-storage | grep token
token-exchange-agent-6cbcbb7bc4-j8qxr                             1/1     Running     0             83m
```

C2:
```
$ oc get pod -n openshift-storage | grep token
token-exchange-agent-5fd4d6844c-2twn2                             1/1     Running     0             83m
```


Verified that the token-exchange-agent pods no longer go into CrashLoopBackOff state with the latest ODF builds.

--- Additional comment from Red Hat Bugzilla on 2023-08-03 08:30:42 UTC ---

Account disabled by LDAP Audit

Comment 16 errata-xmlrpc 2023-09-27 14:22:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.13.3 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:5376

