Created attachment 1869207 [details]
ReplicaSet not showing YAML

Description of problem:
The ReplicaSet and Pod resources on the Topology view show an error instead of the YAML in the sidebar.

Version-Release number of selected component (if applicable):
2.5.0-DOWNSTREAM-2022-03-29-05-04-50

How reproducible:
Frequent

Steps to Reproduce:
1. Deploy the helloworld app
2. Once deployed, go to Topology
3. Click on the ReplicaSet and Pod resources
4. In the sidebar, click on YAML

Actual results:
The message "Error querying for resource: helloworld-app-deploy" is seen

Expected results:
The YAML for the selected resource should be shown

Additional info:
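For reference, a minimal sketch of how one could confirm from outside the console that the ReplicaSet and Pod objects actually exist on the cluster. The namespace name `helloworld` and the pre-1.0 `@kubernetes/client-node` response shape are assumptions for illustration, not details from this report:

```typescript
// Hypothetical verification script (not part of the product). Assumes the
// helloworld app lands in a namespace named "helloworld" and that the current
// kubeconfig context points at the cluster where the app was deployed.
// Assumes the pre-1.0 @kubernetes/client-node response shape ({ body }).
import * as k8s from '@kubernetes/client-node';

async function main(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();

  const apps = kc.makeApiClient(k8s.AppsV1Api);
  const core = kc.makeApiClient(k8s.CoreV1Api);
  const namespace = 'helloworld'; // assumed namespace

  // List ReplicaSets and Pods in the app namespace; if these return items,
  // the resources exist on the cluster even though the Topology sidebar
  // shows "Error querying for resource".
  const rs = await apps.listNamespacedReplicaSet(namespace);
  rs.body.items.forEach((r) => console.log('ReplicaSet:', r.metadata?.name));

  const pods = await core.listNamespacedPod(namespace);
  pods.body.items.forEach((p) => console.log('Pod:', p.metadata?.name));
}

main().catch((err) => console.error(err));
```

If these lists return the expected objects while the sidebar still shows the error, that points at the console/search path rather than the deployment itself.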
G2Bsync 1083239788 comment fxiang1 Wed, 30 Mar 2022 14:51:44 UTC G2Bsync From the screenshots, it seems the app is still deploying (i.e., the status on the topology nodes is yellow), so the resource YAML may not be created yet. When the app finishes deploying (i.e., the status on all topology nodes is green), do you still hit this issue?
Yes, I am still seeing this issue with build `2.5.0-DOWNSTREAM-2022-03-29-05-04-50`. All environments with this build show a similar error when displaying the YAML. It is also confirmed that the applications are being deployed on the backend.
G2Bsync 1086055498 comment fxiang1 Fri, 01 Apr 2022 15:42:08 UTC G2Bsync I worked with Rafat and investigated the two clusters that are hitting this issue:
- https://console-openshift-console.apps.rhv-cluster-06.cicd.red-chesterfield.com/dashboards
- https://console-openshift-console.apps.ocp4-aws-sno-1.dev09.red-chesterfield.com/

For https://console-openshift-console.apps.rhv-cluster-06.cicd.red-chesterfield.com/dashboards, I think there are some problems with the app backend, as I couldn't get an app to deploy. Permission issues appear to be causing this, which explains why the UI cannot show the YAML: the resource is not deployed.

For https://console-openshift-console.apps.ocp4-aws-sno-1.dev09.red-chesterfield.com/, I verified that the ReplicaSet YAML is showing. The Pod YAML not showing is fixed in the latest build (by https://github.com/stolostron/backlog/issues/21106).

So maybe we can keep this issue open until QE is able to verify it on the latest build of ACM.
G2Bsync 1088746160 comment fxiang1 Tue, 05 Apr 2022 14:04:08 UTC G2Bsync The subscription, placement, and application data are obtained directly from Kubernetes, but the app resource data comes from search, which means there will be a delay even though you can already see the resource in Kubernetes. So I think this behaviour is expected.
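To illustrate the delay described above, here is a rough sketch of the retry pattern a client could use when reading from a search index that lags behind the live Kubernetes state. The endpoint path, query shape, and function names are hypothetical and are not the actual ACM search API:

```typescript
// Illustrative sketch only: the "/hypothetical/search" endpoint, the query
// shape, and these function names are invented for this example. The point
// is the general pattern of retrying a search-backed lookup, since the
// search index trails the live Kubernetes state.
interface SearchResult {
  items: Array<{ kind: string; name: string; namespace: string }>;
}

async function searchForResource(kind: string, name: string): Promise<SearchResult> {
  // Hypothetical search request; a real client would issue the actual
  // search API query instead.
  const res = await fetch('/hypothetical/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ kind, name }),
  });
  return (await res.json()) as SearchResult;
}

// Poll until the search index has caught up with the cluster, or give up.
async function waitForIndexedResource(
  kind: string,
  name: string,
  attempts = 10,
  delayMs = 3000,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    const result = await searchForResource(kind, name);
    if (result.items.length > 0) return true;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}
```

The takeaway is only that a search-backed lookup can legitimately return nothing for a resource that already exists in Kubernetes, so a short retry window, rather than an immediate error, is one way a consumer can cope with the lag.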
G2Bsync 1089204185 comment fxiang1 Tue, 05 Apr 2022 19:05:59 UTC G2Bsync @nelsonjean I don't think clearing the cache is necessary to get the updated status. When the user clicks on the node, the first thing they see is the status saying not deployed. That should be clear enough to let users know that the log or YAML is not available. If you still feel we should change the message, then maybe we can lower the severity, since the feature is working and we're just not providing the correct error message.
This was verified and tested in ACM 2.5.0-DOWNSTREAM-2022-04-02-04-38-33 on both RHV and ARM environments running OCP 4.10. Closing the defect.
Tested and verified on VMware with build `2.5.0-DOWNSTREAM-2022-05-06-05-02-19` on OCP 4.10.6.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:4956