Bug 2253953 - noobaa-core-0 in CLBO in disconnected deployment mode
Summary: noobaa-core-0 in CLBO in disconnected deployment mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: Amit Prinz Setter
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-12-11 09:19 UTC by Vijay Avuthu
Modified: 2024-03-19 15:29 UTC
CC List: 4 users

Fixed In Version: 4.15.0-87
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-03-19 15:29:30 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github noobaa noobaa-core pull 7657 0 None open Adapt http-proxy-agent updated api 2023-12-12 12:57:08 UTC
Red Hat Product Errata RHSA-2024:1383 0 None None None 2024-03-19 15:29:33 UTC

Description Vijay Avuthu 2023-12-11 09:19:08 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

In AWS Disconnected deployment mode, noobaa-core-0 is in CrashLoopBackOff


Version of all relevant components (if applicable):

odf-operator.v4.15.0-84.stable


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes


Is there any workaround available to the best of your knowledge?
No


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1


Is this issue reproducible?
2/2


Can this issue be reproduced from the UI?
Not tried

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. install ODF through ocs-ci on AWS in disconnected mode
2. check storagecluster and pods
3.


Actual results:

$ oc get storagecluster
NAME                 AGE    PHASE         EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   151m   Progressing              2023-12-11T06:44:54Z   4.15.0

$ oc get pod noobaa-core-0
NAME            READY   STATUS             RESTARTS       AGE
noobaa-core-0   0/1     CrashLoopBackOff   33 (94s ago)   145m



Expected results:
All pods should be in the Running state and the storagecluster should be in the Ready state.

Additional info:

noobaa-core-0 log

2023-12-11T06:52:40.082995094Z Running /usr/local/bin/node src/upgrade/upgrade_manager.js --upgrade_scripts_dir /root/node_modules/noobaa-core/src/upgrade/upgrade_scripts
2023-12-11T06:52:40.245657625Z Error accrued while getting the data from /etc/noobaa-auth-token/auth_token: Error: ENOENT: no such file or directory, open '/etc/noobaa-auth-token/auth_token'
2023-12-11T06:52:40.249033418Z load_config_local: LOADED {
2023-12-11T06:52:40.249033418Z   DEFAULT_ACCOUNT_PREFERENCES: { ui_theme: 'LIGHT' },
2023-12-11T06:52:40.249033418Z   REMOTE_NOOAA_NAMESPACE: 'openshift-storage',
2023-12-11T06:52:40.249033418Z   ALLOW_BUCKET_CREATE_ON_INTERNAL: false
2023-12-11T06:52:40.249033418Z }
2023-12-11T06:52:40.448299286Z Dec-11 6:52:40.447 [Upgrade/20]    [L0] UPGRADE:: upgrade manager started..
2023-12-11T06:52:40.773737561Z Dec-11 6:52:40.773 [Upgrade/20] [ERROR] UPGRADE:: upgrade failed!! TypeError: createHttpProxyAgent is not a function
2023-12-11T06:52:40.773737561Z     at Object.<anonymous> (/root/node_modules/noobaa-core/src/util/http_utils.js:31:5)
2023-12-11T06:52:40.773737561Z     at Module._compile (node:internal/modules/cjs/loader:1256:14)
2023-12-11T06:52:40.773737561Z     at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
2023-12-11T06:52:40.773737561Z     at Module.load (node:internal/modules/cjs/loader:1119:32)
2023-12-11T06:52:40.773737561Z     at Module._load (node:internal/modules/cjs/loader:960:12)
2023-12-11T06:52:40.773737561Z     at Module.require (node:internal/modules/cjs/loader:1143:19)
2023-12-11T06:52:40.773737561Z     at require (node:internal/modules/cjs/helpers:119:18)
2023-12-11T06:52:40.773737561Z     at Object.<anonymous> (/root/node_modules/noobaa-core/src/rpc/rpc_ws.js:7:20)
2023-12-11T06:52:40.773737561Z     at Module._compile (node:internal/modules/cjs/loader:1256:14)
2023-12-11T06:52:40.773737561Z     at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
2023-12-11T06:52:40.779747727Z upgrade_manager failed with exit code 1
2023-12-11T06:52:40.780265090Z noobaa_init.sh finished
2023-12-11T06:52:40.783499854Z noobaa_init failed with exit code 1. aborting
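
The TypeError above points at the http-proxy-agent npm package: version 5 exported a createHttpProxyAgent() factory function as the module itself, while newer major versions export an HttpProxyAgent class as a named export, so code written against the old API fails as soon as http_utils.js is loaded. The linked noobaa-core pull 7657 ("Adapt http-proxy-agent updated api") adapts the code to the new API. A minimal sketch of the difference, assuming a v5-style import was previously in place (the proxy URL and request target below are illustrative, not taken from the noobaa sources):

// Old usage (http-proxy-agent v5): the module itself is the factory function.
//   const createHttpProxyAgent = require('http-proxy-agent');
//   const agent = createHttpProxyAgent('http://proxy.example.com:3128');
// With v6+ installed, that require() returns an object of named exports, so
// calling it throws "createHttpProxyAgent is not a function".

// New usage (http-proxy-agent v6+): construct the named class directly.
const { HttpProxyAgent } = require('http-proxy-agent');
const http = require('http');

const agent = new HttpProxyAgent('http://proxy.example.com:3128');

// Outgoing requests are then routed through the proxy by passing the agent:
http.get('http://registry.example.com/', { agent }, (res) => {
    console.log('status:', res.statusCode);
});

Keeping the import style and the installed major version of http-proxy-agent in sync avoids the mismatch, so the upgrade manager no longer crashes and noobaa_init.sh is not aborted.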



job url: https://url.corp.redhat.com/11127ed

must gather: https://url.corp.redhat.com/a024884

Comment 4 Vijay Avuthu 2023-12-13 16:19:09 UTC
verified here: https://url.corp.redhat.com/4c1bb2d

logs: https://url.corp.redhat.com/7d11a08


BUILD ID: 4.15.0-87 RUN ID: 1702478674

All acceptance suite tests passed

Comment 8 errata-xmlrpc 2024-03-19 15:29:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383

