Bug 2157876 - [OCP Tracker] [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn't appear after ODF upgrade resulting in dashboard crash
Summary: [OCP Tracker] [UI] When OCP and ODF are upgraded, refresh web console pop-up ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: Bipul Adhikari
QA Contact: Aman Agrawal
URL:
Whiteboard:
Depends On:
Blocks: 2107226 2154341 2157887
 
Reported: 2023-01-03 11:31 UTC by Aman Agrawal
Modified: 2023-12-08 04:31 UTC (History)
11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
.Refresh popup is shown when OpenShift Data Foundation is upgraded
Previously, when OpenShift Data Foundation was upgraded, OpenShift Container Platform did not show the *Refresh* popup because it was unaware of the change: it did not check the `version` field of the `plugin-manifest.json` file served by the `odf-console` pod. With this fix, OpenShift Container Platform polls the manifest of the OpenShift Data Foundation user interface and shows a *Refresh* popup when the version changes.
Clone Of:
Environment:
Last Closed: 2023-06-21 15:22:55 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2023:3742 0 None None None 2023-06-21 15:23:42 UTC

Description Aman Agrawal 2023-01-03 11:31:48 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Both upgrades were performed via the CLI.


Version of all relevant components (if applicable):
OCP 4.11.0-0.nightly-2022-12-26-210225 and ODF 4.11.4
upgraded to
OCP 4.12.0-0.nightly-2022-12-27-111646

then ODF upgraded to 
ODF 4.12.0-152.stable


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy OCP+ODF 4.11
2. Upgrade OCP to 4.12 (via CLI in this case)
3. The refresh web console pop-up appeared; the dashboard did not crash
4. Upgrade ODF to 4.12 (via CLI in this case)
5. The refresh web console pop-up did not appear; the Data Foundation dashboard under the Storage section crashes with a "404: Page not found" error when clicked


Actual results: When OCP and ODF are upgraded, the refresh web console pop-up does not appear after the ODF upgrade, resulting in a dashboard crash.


Expected results: The Data Foundation dashboard under the Storage section should not crash; the refresh pop-up should appear after the ODF upgrade.

Output before the ODF upgrade but after OCP was upgraded to 4.12:

[amagrawa@amagrawa ~]$ pods
Already on project "openshift-storage" on server "https://api.amagrawa-2jan-2.qe.rh-ocs.com:6443".
NAME                                                              READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
csi-addons-controller-manager-7bc5dfb79d-854z7                    2/2     Running   0          16h   10.128.2.11   compute-2   <none>           <none>
csi-cephfsplugin-2gwqh                                            3/3     Running   3          17h   10.1.161.50   compute-0   <none>           <none>
csi-cephfsplugin-provisioner-75f84bbbd8-8b4mg                     6/6     Running   0          16h   10.131.0.16   compute-0   <none>           <none>
csi-cephfsplugin-provisioner-75f84bbbd8-ldch9                     6/6     Running   0          16h   10.128.2.6    compute-2   <none>           <none>
csi-cephfsplugin-s4w89                                            3/3     Running   3          17h   10.1.161.27   compute-2   <none>           <none>
csi-cephfsplugin-vg5lz                                            3/3     Running   3          17h   10.1.161.28   compute-1   <none>           <none>
csi-rbdplugin-dk4k2                                               4/4     Running   4          17h   10.1.161.28   compute-1   <none>           <none>
csi-rbdplugin-fmb9p                                               4/4     Running   4          17h   10.1.161.50   compute-0   <none>           <none>
csi-rbdplugin-provisioner-7454ccf6f5-22dtr                        7/7     Running   0          16h   10.128.2.15   compute-2   <none>           <none>
csi-rbdplugin-provisioner-7454ccf6f5-kgz5b                        7/7     Running   0          16h   10.131.0.13   compute-0   <none>           <none>
csi-rbdplugin-tw656                                               4/4     Running   4          17h   10.1.161.27   compute-2   <none>           <none>
noobaa-core-0                                                     1/1     Running   0          16h   10.131.0.26   compute-0   <none>           <none>
noobaa-db-pg-0                                                    1/1     Running   0          16h   10.131.0.25   compute-0   <none>           <none>
noobaa-endpoint-6f8f657f6d-c9xxk                                  1/1     Running   0          16h   10.131.0.17   compute-0   <none>           <none>
noobaa-operator-79475846c9-tqnv2                                  1/1     Running   0          16h   10.131.0.12   compute-0   <none>           <none>
ocs-metrics-exporter-66b85fd68b-zjg8g                             1/1     Running   0          16h   10.128.2.13   compute-2   <none>           <none>
ocs-operator-5cb888d76b-b2bdr                                     1/1     Running   0          16h   10.131.0.18   compute-0   <none>           <none>
odf-console-59948d686-j6ct2                                       1/1     Running   0          16h   10.128.2.14   compute-2   <none>           <none>
odf-operator-controller-manager-66c5dc5595-2djbb                  2/2     Running   0          16h   10.128.2.8    compute-2   <none>           <none>
rook-ceph-crashcollector-compute-0-54f9dfddff-8z7dl               1/1     Running   0          16h   10.131.0.8    compute-0   <none>           <none>
rook-ceph-crashcollector-compute-1-6bd6d8bccd-cvv2m               1/1     Running   0          15h   10.129.2.6    compute-1   <none>           <none>
rook-ceph-crashcollector-compute-2-655797b94f-zmt8g               1/1     Running   0          16h   10.128.2.24   compute-2   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-79fbbd49tzrr2   2/2     Running   0          16h   10.131.0.19   compute-0   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-599487f4wlc8w   2/2     Running   0          16h   10.128.2.10   compute-2   <none>           <none>
rook-ceph-mgr-a-66b97b5b89-2dvhz                                  2/2     Running   0          16h   10.131.0.9    compute-0   <none>           <none>
rook-ceph-mon-a-787b687c54-lnx27                                  2/2     Running   0          16h   10.131.0.21   compute-0   <none>           <none>
rook-ceph-mon-b-7cdc856fbd-6xphg                                  2/2     Running   0          15h   10.129.2.8    compute-1   <none>           <none>
rook-ceph-mon-c-5dd7db6c98-5nj55                                  2/2     Running   0          16h   10.128.2.29   compute-2   <none>           <none>
rook-ceph-operator-c5687f7c7-2dtfb                                1/1     Running   0          16h   10.128.2.16   compute-2   <none>           <none>
rook-ceph-osd-0-7547db9dfd-rfr4r                                  2/2     Running   0          15h   10.129.2.7    compute-1   <none>           <none>
rook-ceph-osd-1-6f7cb7fcf4-t64h4                                  2/2     Running   0          16h   10.131.0.7    compute-0   <none>           <none>
rook-ceph-osd-2-6c5497cfc5-njm6m                                  2/2     Running   0          16h   10.128.2.31   compute-2   <none>           <none>
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5fb47d5cghpv   2/2     Running   0          16h   10.128.2.7    compute-2   <none>           <none>
rook-ceph-tools-6bfd9c5c8d-952dv                                  1/1     Running   0          16h   10.131.0.10   compute-0   <none>           <none>



Output after the ODF upgrade:
[amagrawa@amagrawa ~]$ pods
Already on project "openshift-storage" on server "https://api.amagrawa-2jan-2.qe.rh-ocs.com:6443".
NAME                                                              READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
csi-addons-controller-manager-8645bf9997-5p2g4                    2/2     Running   0          17m   10.129.2.170   compute-1   <none>           <none>
csi-cephfsplugin-gkcvc                                            2/2     Running   0          16m   10.1.161.27    compute-2   <none>           <none>
csi-cephfsplugin-njwzw                                            2/2     Running   0          16m   10.1.161.28    compute-1   <none>           <none>
csi-cephfsplugin-provisioner-787d6c75d8-jc8pq                     5/5     Running   0          17m   10.129.2.175   compute-1   <none>           <none>
csi-cephfsplugin-provisioner-787d6c75d8-w7xcv                     5/5     Running   0          17m   10.128.2.38    compute-2   <none>           <none>
csi-cephfsplugin-vst9h                                            2/2     Running   0          17m   10.1.161.50    compute-0   <none>           <none>
csi-rbdplugin-7r9bz                                               3/3     Running   0          17m   10.1.161.28    compute-1   <none>           <none>
csi-rbdplugin-pd9lq                                               3/3     Running   0          16m   10.1.161.27    compute-2   <none>           <none>
csi-rbdplugin-provisioner-d94cd7fb7-qznh8                         6/6     Running   0          17m   10.128.2.37    compute-2   <none>           <none>
csi-rbdplugin-provisioner-d94cd7fb7-z4nnp                         6/6     Running   0          17m   10.129.2.174   compute-1   <none>           <none>
csi-rbdplugin-xmtdr                                               3/3     Running   0          16m   10.1.161.50    compute-0   <none>           <none>
noobaa-core-0                                                     1/1     Running   0          16m   10.129.2.177   compute-1   <none>           <none>
noobaa-db-pg-0                                                    1/1     Running   0          17m   10.129.2.180   compute-1   <none>           <none>
noobaa-endpoint-69d8c648cc-ggqkw                                  1/1     Running   0          13m   10.129.2.182   compute-1   <none>           <none>
noobaa-operator-558f469db8-tmfh4                                  1/1     Running   0          17m   10.129.2.176   compute-1   <none>           <none>
ocs-metrics-exporter-7f98576c58-swnqp                             1/1     Running   0          17m   10.129.2.164   compute-1   <none>           <none>
ocs-operator-8564889577-d2m85                                     1/1     Running   0          17m   10.129.2.165   compute-1   <none>           <none>
odf-console-b856b6955-p2qrn                                       1/1     Running   0          19m   10.129.2.160   compute-1   <none>           <none>
odf-operator-controller-manager-7f7c5bc8f9-rzltk                  2/2     Running   0          19m   10.129.2.159   compute-1   <none>           <none>
rook-ceph-crashcollector-compute-0-779c5f4c4b-mgc26               1/1     Running   0          16m   10.131.0.34    compute-0   <none>           <none>
rook-ceph-crashcollector-compute-1-777d49966c-8zzsg               1/1     Running   0          15m   10.129.2.178   compute-1   <none>           <none>
rook-ceph-crashcollector-compute-2-6889496c8d-s55cv               1/1     Running   0          15m   10.128.2.39    compute-2   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-68467b958rz87   2/2     Running   0          15m   10.131.0.39    compute-0   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5c76bd66j8dqg   2/2     Running   0          14m   10.128.2.40    compute-2   <none>           <none>
rook-ceph-mgr-a-67d5f6d77c-ttmh5                                  2/2     Running   0          14m   10.128.2.42    compute-2   <none>           <none>
rook-ceph-mon-a-5c4d78c547-rf78v                                  2/2     Running   0          16m   10.131.0.35    compute-0   <none>           <none>
rook-ceph-mon-b-7cc4dbd945-j96xp                                  2/2     Running   0          15m   10.129.2.179   compute-1   <none>           <none>
rook-ceph-mon-c-7855db7b87-8ljvj                                  2/2     Running   0          14m   10.128.2.41    compute-2   <none>           <none>
rook-ceph-operator-55649658fb-kktcz                               1/1     Running   0          17m   10.129.2.166   compute-1   <none>           <none>
rook-ceph-osd-0-5c99784874-pxqkt                                  2/2     Running   0          13m   10.129.2.183   compute-1   <none>           <none>
rook-ceph-osd-1-56b55c8c54-l2l7v                                  2/2     Running   0          13m   10.131.0.40    compute-0   <none>           <none>
rook-ceph-osd-2-56999c8977-t977l                                  2/2     Running   0          12m   10.128.2.43    compute-2   <none>           <none>
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-74f5b972xt6s   2/2     Running   0          15m   10.131.0.38    compute-0   <none>           <none>
rook-ceph-tools-855965db79-jct9l                                  1/1     Running   0          17m   10.129.2.168   compute-1   <none>           <none>



Additional info:
Live cluster for debugging:

Web Console: https://console-openshift-console.apps.amagrawa-2jan-2.qe.rh-ocs.com
Login: kubeadmin
Password: ymp72-fgK7j-ZFjHw-KKnbf

Kubeconfig- http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/amagrawa-2jan-2/amagrawa-2jan-2_20230102T135010/openshift-cluster-dir/auth/kubeconfig

I will attach the must-gather logs in the next comment.

Comment 3 Aman Agrawal 2023-01-03 11:49:06 UTC
Created attachment 1935465 [details]
Data Foundation dashboard crash

Must-gather logs are placed here- http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-aman/3jan23/

Also refer to the attached screenshot.

Comment 4 Aman Agrawal 2023-01-03 12:06:34 UTC
As a workaround, a hard refresh of the console brings back the dashboard, and it does not crash afterwards (tested and confirmed).
Reducing the severity to Medium after discussing with Bipul.

Comment 7 Bipul Adhikari 2023-01-10 09:01:19 UTC
https://issues.redhat.com/browse/OCPBUGS-5534
Registered an issue on the OCP Console side.
This is a generic issue with OCP; any operator going through the upgrade process will face it.
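
The fix described in the doc text amounts to polling the plugin manifest and comparing its `version` field against the last-seen value. A minimal sketch of that check (the manifest payloads below are hypothetical; a real `plugin-manifest.json` carries more fields, and the console fetches it over HTTP from the plugin service):

```python
import json

def manifest_version(manifest_json: str) -> str:
    """Extract the `version` field from a plugin-manifest.json payload."""
    return json.loads(manifest_json)["version"]

def needs_refresh(cached_version: str, fetched_json: str) -> bool:
    """Return True when the served manifest version differs from the
    cached one -- the signal used to show the Refresh popup."""
    return manifest_version(fetched_json) != cached_version

# Illustrative payloads mirroring the upgrade in this bug (4.11.4 -> 4.12.0)
before = '{"name": "odf-console", "version": "4.11.4"}'
after = '{"name": "odf-console", "version": "4.12.0"}'

cached = manifest_version(before)
print(needs_refresh(cached, after))   # True: version changed, show popup
print(needs_refresh(cached, before))  # False: no change, no popup
```

Without such a check, the console keeps serving stale plugin assets after the `odf-console` pod is replaced, which is why the dashboard 404s until a hard refresh.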

Comment 17 Bipul Adhikari 2023-03-31 08:22:06 UTC
There is nothing to triage here. This is a tracker bug.

Comment 18 Bipul Adhikari 2023-04-05 03:44:06 UTC
The OCP 4.13 bug has been verified. Closing this BZ.
`This bug has been verified on payload 4.13.0-0.nightly-2023-02-11-150735` => from the OCP BZ

Comment 19 Bipul Adhikari 2023-04-05 04:35:12 UTC
Moving to ON_QA because the original bug is verified.

Comment 24 Mudit Agarwal 2023-06-02 11:07:42 UTC
Because this is fixed now, I changed the doc type from Known Issue to Bug Fix.
Please update the doc text.

Comment 27 errata-xmlrpc 2023-06-21 15:22:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

Comment 28 Red Hat Bugzilla 2023-12-08 04:31:48 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

