Bug 2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: Console
Version: rhacm-2.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: rhacm-2.5
Assignee: Feng Xiang
QA Contact: Almen Ng
Docs Contact: Christopher Dawson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-30 01:24 UTC by Rafat Islam
Modified: 2023-06-21 11:18 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-09 02:10:11 UTC
Target Upstream Version:
Embargoed:
bot-tracker-sync: rhacm-2.5+


Attachments
ReplicaSet not showing YAML (424.69 KB, image/png)
2022-03-30 01:24 UTC, Rafat Islam


Links
Github stolostron backlog issues 21266 — 2022-03-30 06:09:09 UTC
Red Hat Product Errata RHSA-2022:4956 — 2022-06-09 02:10:19 UTC

Description Rafat Islam 2022-03-30 01:24:27 UTC
Created attachment 1869207 [details]
ReplicaSet not showing YAML

Description of problem:
ReplicaSet and Pod resources on the Topology page show an error instead of the YAML in the sidebar.

Version-Release number of selected component (if applicable):
2.5.0-DOWNSTREAM-2022-03-29-05-04-50

How reproducible:
Frequent

Steps to Reproduce:
1. Deploy helloworld app
2. Once deployed, go to Topology
3. Click on the ReplicaSet and Pod resources
4. On the sidebar, click on YAML

Actual results:
The message `Error querying for resource: helloworld-app-deploy` is seen

Expected results:
The YAML for the selected resource should be displayed

Additional info:

Comment 2 bot-tracker-sync 2022-03-30 15:22:22 UTC
G2Bsync 1083239788 comment 
 fxiang1 Wed, 30 Mar 2022 14:51:44 UTC 
From the screenshots, it seems the app is still deploying (i.e. the status on the topology nodes is yellow), so the resource YAML may not be created yet. When the app finishes deploying (i.e. the status on all the topology nodes is green), do you still hit this issue?

Comment 3 Rafat Islam 2022-03-31 21:06:00 UTC
Yes, I am still seeing this issue with the build `2.5.0-DOWNSTREAM-2022-03-29-05-04-50`. All environments with this build show a similar error when displaying the YAML. It's also confirmed that the applications are being deployed on the backend.
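For context, confirming the backend deployment directly would look something like the following. This is only an illustrative sketch: the namespace and resource names are assumptions based on the helloworld sample app, not values taken from this report, and the commands require access to a live cluster.

```shell
# Assumed namespace for the sample helloworld app; adjust for your environment.
# List the ReplicaSet and Pods that the Topology sidebar is trying to show
oc get replicaset,pods -n helloworld

# Fetch the ReplicaSet YAML directly from the cluster,
# bypassing the console's search-based sidebar
oc get replicaset -n helloworld -o yaml
```

If these commands return the resources, the YAML exists on the cluster and the sidebar error points at the console/search layer rather than a deployment failure.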

Comment 5 bot-tracker-sync 2022-04-01 18:26:06 UTC
G2Bsync 1086055498 comment 
 fxiang1 Fri, 01 Apr 2022 15:42:08 UTC 
I worked with Rafat and investigated the two clusters that are hitting this issue:
- https://console-openshift-console.apps.rhv-cluster-06.cicd.red-chesterfield.com/dashboards
- https://console-openshift-console.apps.ocp4-aws-sno-1.dev09.red-chesterfield.com/

For https://console-openshift-console.apps.rhv-cluster-06.cicd.red-chesterfield.com/dashboards, I think there are some problems with the app backend, as I couldn't get an app to deploy. It looks like permission issues are causing this, which would explain why the UI cannot show the YAML: the resource is not deployed.

![image](https://user-images.githubusercontent.com/38960034/161297035-1d43226d-babc-4457-88c4-9f81fc9a4a1a.png)
![image](https://user-images.githubusercontent.com/38960034/161297075-c4aa7ea9-fd7b-43b6-9795-a573b72af733.png)

For https://console-openshift-console.apps.ocp4-aws-sno-1.dev09.red-chesterfield.com/, I verified that the ReplicaSet YAML is showing. As for the Pod YAML not showing, this is fixed in the latest build (by https://github.com/stolostron/backlog/issues/21106).

![image](https://user-images.githubusercontent.com/38960034/161297116-e071411a-eed5-42b6-8d5f-d1576c30167e.png)
![image](https://user-images.githubusercontent.com/38960034/161297144-e7250cea-752d-482f-9177-70e83a66af93.png)


So maybe we can keep this issue open until QE is able to verify this on the latest build of ACM.

Comment 8 bot-tracker-sync 2022-04-05 15:25:16 UTC
G2Bsync 1088746160 comment 
 fxiang1 Tue, 05 Apr 2022 14:04:08 UTC 

The subscription, placement, and application data are obtained directly from Kubernetes, but the app resource data comes from search, which means there will be a delay even after the resource is visible in Kubernetes. So I think the behaviour is expected.

Comment 10 bot-tracker-sync 2022-04-05 19:37:02 UTC
G2Bsync 1089204185 comment 
 fxiang1 Tue, 05 Apr 2022 19:05:59 UTC 

@nelsonjean I don't think clearing the cache is necessary to get the updated status.

When the user clicks on the node, the first thing they see is the status saying not deployed:
![image](https://user-images.githubusercontent.com/38960034/161828985-cd2a6b11-1a6a-475d-8d91-7dfe1a53eda7.png)

So that should be clear enough to let users know that the log or YAML is not available.

If you still feel we should change the message, then maybe we can lower the severity, since the feature is working and we're just not providing the correct error message.

Comment 12 Rafat Islam 2022-04-07 14:03:16 UTC
This is verified and tested in ACM 2.5.0-DOWNSTREAM-2022-04-02-04-38-33 on both RHV and ARM environments running OCP 4.10. Closing the defect.

Comment 14 Almen Ng 2022-05-10 20:48:55 UTC
Tested and verified on VMware with `2.5.0-DOWNSTREAM-2022-05-06-05-02-19` on OCP 4.10.6

Comment 18 errata-xmlrpc 2022-06-09 02:10:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4956

