+++ This bug was initially created as a clone of Bug #2209364 +++

Description of problem (please be detailed as possible and provide log snippets):

Version of all relevant components (if applicable):
OCP+ODF 4.12 upgraded to OCP 4.13.0-0.nightly-2023-05-22-181752 & ODF v4.13.0-203.stable

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Upgrade an OCP+ODF 4.12 cluster to OCP+ODF 4.13. After the ODF upgrade completes and the UI dashboard validation is done, the ODF dashboard crashes with the error "Oh no! Something went wrong."

Actual results:
The ODF dashboard crashes when OCP and ODF are upgraded.
Screenshots - http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/amagrawa-23may/amagrawa-23may_20230523T093509/logs/ui_logs_dir_1684838391/screenshots_ui/test_upgrade/
Console logs - https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/24790/consoleFull (search for tests/ecosystem/upgrade/test_upgrade.py)

Expected results:
The ODF dashboard shouldn't crash when either OCP or ODF or both are upgraded. If possible, introduce a web console refresh pop-up after the ODF upgrade to avoid this issue, as it has been consistent across different OCP+ODF versions.

Additional info:

--- Additional comment from RHEL Program Management on 2023-05-23 16:04:37 UTC ---

This bug previously had no release flag set; the release flag 'odf-4.13.0' is now set to '?', and so the bug is being proposed to be fixed in the ODF 4.13.0 release.
Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from RHEL Program Management on 2023-05-23 16:04:37 UTC ---

Since this bug has severity set to 'urgent', it is being proposed as a blocker for the currently set release flag. Please resolve ASAP.

--- Additional comment from Bipul Adhikari on 2023-05-24 07:20:30 UTC ---

A simple refresh should do the trick. Bringing down the severity to high.

--- Additional comment from Bipul Adhikari on 2023-05-24 13:49:08 UTC ---

We have made changes on the ODF build side to add version details, but it looks like the build steps are not passing the correct ODF version as an environment variable to the ODF build step. The change required is as follows: while running the `yarn build` step, an additional environment variable needs to be passed: `PLUGIN_VERSION=x.y.z`. The version should match the ODF version. Once this is done, the console will be able to pick up the change in version and trigger the pop-up for refresh. Moving this to the build team.

--- Additional comment from Boris Ranto on 2023-05-25 10:15:35 UTC ---

We are already setting the variables correctly; see e.g. here in the build log:

https://download.eng.bos.redhat.com/brewroot/packages/odf-console-container/v4.13.0/80/data/logs/x86_64.log

We were using the lowercase `env` command to set it in this build, so I fixed that (it is unrelated to the error, just a recommendation to use uppercase commands in Dockerfiles) in this commit:

https://gitlab.cee.redhat.com/ceph/rhodf/-/commit/90e12fe88bb3f5b60a0b4c0435731eb6518c6384

Setting back to the management-console component.

--- Additional comment from Bipul Adhikari on 2023-05-26 08:14:13 UTC ---

When inspecting the builds, the version is being set to the default (0.0.0) and not to the correct version. Even if the env variable is being set, it's not being passed to the child process.
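The "set but not passed to the child process" failure mode described above can be sketched with a minimal shell example. This is an illustration, not the actual build script: a plain `sh -c` child stands in for `yarn build`, and the `4.13.0` value and `0.0.0` fallback are taken from the versions discussed in this bug.

```shell
#!/bin/sh
# Sketch of the env-propagation pitfall: a variable assigned without
# `export` is shell-local and is NOT inherited by child processes, so a
# build tool spawned by the shell would fall back to its default version.
unset PLUGIN_VERSION             # start from a clean slate

PLUGIN_VERSION=4.13.0            # shell-local assignment only
child_sees=$(sh -c 'echo "${PLUGIN_VERSION:-0.0.0}"')
echo "without export: $child_sees"   # prints 0.0.0 (the default)

export PLUGIN_VERSION            # now part of the environment
child_sees=$(sh -c 'echo "${PLUGIN_VERSION:-0.0.0}"')
echo "with export:    $child_sees"   # prints 4.13.0
```

An inline assignment such as `PLUGIN_VERSION=4.13.0 yarn build` has the same effect as the export: the variable is placed in that one child's environment.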
This needs to be fixed on the build side. Moving it to build.

--- Additional comment from Boris Ranto on 2023-05-27 07:09:02 UTC ---

That is not the issue; we are setting the variable correctly. I have actually added `env` output to the RUN command that also runs `yarn build` to confirm this in the following build (the `env` command only lists exported environment variables). Hence, there must be something in the code that is failing to set the version; the variable is set and exported properly, as can be seen here:

https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=52899334

or e.g. directly in the x86_64 log:

https://download.eng.bos.redhat.com/brewroot/work/tasks/9334/52899334/x86_64.log

However, when you go inside that container:

# podman run --pull always -it --entrypoint /bin/sh registry-proxy.engineering.redhat.com/rh-osbs/odf4-odf-console-rhel9:rhodf-4.13-rhel-9-containers-candidate-36079-20230526154811
sh-5.1$ head -3 plugin-manifest.json
{
  "name": "odf-console",
  "version": "0.0.0",
sh-5.1$

you can see that the version is not really updated.

--- Additional comment from Bipul Adhikari on 2023-05-30 10:13:35 UTC ---

There indeed was an issue in the ODF console code base. Sending a patch on our side.
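The container inspection above boils down to checking which version string got baked into plugin-manifest.json. A hypothetical post-build sanity check is sketched below; the manifest body is mocked with the fixed version so the snippet is self-contained, and the `sed` extraction (rather than `jq`, which minimal images may lack) is an assumption, not part of the actual build.

```shell
#!/bin/sh
# Hypothetical check that the built plugin-manifest.json carries the
# intended version instead of the 0.0.0 default seen in this bug.
expected="4.13.0"

# Mocked manifest for a self-contained run; in the real container this
# file is produced by `yarn build`.
cat > plugin-manifest.json <<'EOF'
{
  "name": "odf-console",
  "version": "4.13.0"
}
EOF

# Extract the "version" value without requiring jq in the image.
actual=$(sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' plugin-manifest.json)

if [ "$actual" = "$expected" ]; then
  echo "plugin-manifest.json version OK: $actual"
else
  echo "plugin-manifest.json still reports $actual (expected $expected)" >&2
fi
```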
--- Additional comment from RHEL Program Management on 2023-05-30 10:20:03 UTC ---

This BZ is being approved for the ODF 4.13.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.13.0'.

--- Additional comment from RHEL Program Management on 2023-05-30 10:20:03 UTC ---

Since this bug has been approved for the ODF 4.13.0 release through release flag 'odf-4.13.0+', the Target Release is being set to 'ODF 4.13.0'.

--- Additional comment from errata-xmlrpc on 2023-05-30 12:23:37 UTC ---

This bug has been added to advisory RHBA-2023:108078 by the ceph-build service account (ceph-build.COM).

--- Additional comment from Aman Agrawal on 2023-06-09 08:54:14 UTC ---

Now, when OCP & ODF are upgraded, the web console refresh pop-up is shown in the UI after each upgrade completes:

http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/amagrawa-07june/amagrawa-07june_20230607T104051/logs/ui_logs_dir_1686138734/screenshots_ui/

This allows the console changes to take effect and thus avoids the ODF dashboard crash after the OCP or ODF upgrade.

Console logs - https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/25397/consoleFull

Verified on ODF 4.13.0-214. (The run failed due to some other issue unrelated to this BZ.)

Bipul, shall we backport this fix?

--- Additional comment from Bipul Adhikari on 2023-06-13 08:32:10 UTC ---

It is a good candidate for backport.
PRs should be merged in a few hours.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.5 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:4287