Bug 1888363 - namespaces crash in dev
Summary: namespaces crash in dev
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.7.0
Assignee: Filip Krepinsky
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-14 16:41 UTC by Filip Krepinsky
Modified: 2021-02-24 15:26 UTC
CC List: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:26:11 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift console pull 6926 0 None closed Bug 1888363: log namespaces errors instead of crashing in dev 2021-01-08 12:16:48 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:26:40 UTC

Description Filip Krepinsky 2020-10-14 16:41:59 UTC
When Prometheus is not installed (e.g. on a clean crc cluster), the namespaces page crashes with

namespace.jsx:133 Error: Service Unavailable
    at validateStatus (co-fetch.js:44)
    at co-fetch.js:112

This is mainly painful when running e2e tests.


Expected results:
k8s/cluster/namespaces does not crash without prometheus in dev
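
The linked PR (openshift console pull 6926, "log namespaces errors instead of crashing in dev") points at the intended behaviour: catch the Prometheus errors and log them so the page keeps rendering. A minimal TypeScript sketch of that idea, using hypothetical names (fetchNamespaceMetrics, updateNamespaceMetrics) rather than the console's actual identifiers:

// Minimal sketch, not the console's actual code: catch Prometheus proxy
// failures and log them so the namespaces list keeps rendering.
const fetchNamespaceMetrics = (query: string): Promise<Response> =>
  fetch(`/api/prometheus/api/v1/query?query=${encodeURIComponent(query)}`).then((res) => {
    if (!res.ok) {
      // Similar to the validateStatus check in co-fetch.js: non-2xx becomes an Error.
      throw new Error(res.statusText);
    }
    return res;
  });

export const updateNamespaceMetrics = (setMetrics: (metrics: unknown) => void): Promise<void> =>
  fetchNamespaceMetrics('namespace:container_cpu_usage:sum')
    .then((res) => res.json())
    .then(setMetrics)
    .catch((e) => {
      // When Prometheus is unavailable (502/504), log instead of rethrowing,
      // so the dev tooling does not treat it as a page crash.
      console.error('Unable to fetch namespace metrics', e);
    });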

Comment 2 Yadan Pei 2020-10-21 08:56:05 UTC
Hi Filip,

I tried to verify this bug by disabling Prometheus after the OCP cluster was up.

1. Disable Prometheus pods 
# cat <<EOF >version-patch-first-override.yaml
> - op: add
>   path: /spec/overrides
>   value:
>   - kind: Deployment
>     group: apps/v1
>     name: cluster-monitoring-operator
>     namespace: openshift-monitoring
>     unmanaged: true
> EOF

# oc patch clusterversion version --type json -p "$(cat version-patch-first-override.yaml)"

# oc scale deployment cluster-monitoring-operator --replicas=0 -n openshift-monitoring
deployment.apps/cluster-monitoring-operator scaled
# oc scale deployment grafana --replicas=0 -n openshift-monitoring
deployment.apps/grafana scaled
# oc scale deployment kube-state-metrics --replicas=0 -n openshift-monitoring
deployment.apps/kube-state-metrics scaled
# oc scale deployment openshift-state-metrics --replicas=0 -n openshift-monitoring
deployment.apps/openshift-state-metrics scaled
# oc scale deployment prometheus-adapter --replicas=0 -n openshift-monitoring
deployment.apps/prometheus-adapter scaled
# oc scale deployment prometheus-operator --replicas=0 -n openshift-monitoring
deployment.apps/prometheus-operator scaled
# oc scale deployment telemeter-client --replicas=0 -n openshift-monitoring
deployment.apps/telemeter-client scaled
# oc scale deployment thanos-querier --replicas=0 -n openshift-monitoring
deployment.apps/thanos-querier scaled
# oc scale statefulset alertmanager-main --replicas=0 -n openshift-monitoring
statefulset.apps/alertmanager-main scaled
# oc scale statefulset prometheus-k8s --replicas=0 -n openshift-monitoring
statefulset.apps/prometheus-k8s scaled

2. Visit the k8s/cluster/namespaces URL. I can see JS errors in the browser console, but the page does not crash:

Unable to fetch pod metrics Error: Bad Gateway
    u main-chunk-6b551ed9b809b1b3d9c7.min.js:1
    s main-chunk-6b551ed9b809b1b3d9c7.min.js:1
main-chunk-6b551ed9b809b1b3d9c7.min.js:1

GET https://console-openshift-console.xxxx.openshift.com/api/prometheus/api/v1/rules
[HTTP/1.1 502 Bad Gateway 30332ms]

XHR GET https://console-openshift-console.xxxx.openshift.com/api/prometheus/api/v1/query?&query=sum%20by(namespace)%20(container_memory_working_set_bytes{container=%22%22,pod!=%22%22})
[HTTP/1.1 504 Gateway Time-out 30332ms]

XHR GET https://console-openshift-console.xxxx.openshift.com/api/prometheus/api/v1/query?&query=namespace:container_cpu_usage:sum
[HTTP/1.1 502 Bad Gateway 30333ms]

Comment 3 Yadan Pei 2020-10-21 08:57:06 UTC
Do you think I should also verify in `dev` mode, since I see you said `k8s/cluster/namespaces does not crash without prometheus in dev`?

Comment 4 Filip Krepinsky 2020-10-21 12:46:10 UTC
It was crashing only in development mode when using the webpack dev server (https://github.com/openshift/console#frontend-development), which has stricter error handling. Either way, the errors should still be visible in the browser's console.
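
For anyone reproducing this, a small TypeScript illustration (not console code) of why the same failure is fatal in dev but only a console message in production, assuming the dev overlay hooks the global unhandledrejection event, which is how such tooling commonly works:

// Illustration only: overlay-style dev tooling is assumed to listen for
// unhandled rejections and treat them as page-level errors.
window.addEventListener('unhandledrejection', (event: PromiseRejectionEvent) => {
  // Handlers like this are what turn a stray rejection into a full-page
  // error in dev; a production build just leaves it in the console.
  console.error('would surface as a crash in dev:', event.reason);
});

// Unhandled rejection: fires the event above, so dev mode reports a crash.
Promise.reject(new Error('Service Unavailable'));

// Handled rejection: never reaches the global handler, in dev or prod.
Promise.reject(new Error('Service Unavailable')).catch((e) => {
  console.warn('Unable to fetch metrics, continuing without them:', e);
});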

Comment 5 Yadan Pei 2020-10-23 03:16:06 UTC
1. Follow the steps in comment 2 to disable cluster monitoring
2. Then follow the steps in https://github.com/openshift/console#frontend-development to run the console in dev mode
3. Visit the http://localhost:9000/k8s/cluster/namespaces and http://localhost:9000/k8s/cluster/namespaces/<one_namespace> pages; the console doesn't crash and errors are shown in the browser's console

Verified on 4.7.0-0.nightly-2020-10-22-175439
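
Since the original report calls out e2e tests as the main pain point, the manual check above could also be captured as a test. A hypothetical Cypress-style spec in TypeScript, whose selectors, heading text, and use of the default namespace are illustrative and do not match the console's actual e2e suite:

/// <reference types="cypress" />

// Hypothetical spec: the namespaces pages should render (no dev error
// overlay) even when the Prometheus proxy returns 502/504.
describe('namespaces page without Prometheus', () => {
  it('renders the list and detail pages, only logging metrics errors', () => {
    cy.visit('/k8s/cluster/namespaces');
    // Assumed heading text; the real page heading may differ.
    cy.contains('h1', 'Namespaces').should('be.visible');

    // 'default' stands in for <one_namespace> from the comment above.
    cy.visit('/k8s/cluster/namespaces/default');
    cy.contains('h1', 'default').should('be.visible');
  });
});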

Comment 8 errata-xmlrpc 2021-02-24 15:26:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

