Created attachment 1849321 [details]
4.9.13 dashboard, no issue for the graphs

Description of problem:
On 4.9.13, after logging in via the Grafana route, all graphs show correctly on each dashboard. After upgrading to 4.10.0-0.nightly-2022-01-05-181126, a 403 Forbidden error shows for all the graphs on every dashboard. Checked in a fresh 4.10.0-0.nightly-2022-01-05-181126 cluster: no such error. The error occurs only when upgrading from 4.9 to 4.10.

NOTE: all the attached pictures use the etcd dashboard as an example.

# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.13    True        False         10m     Cluster version is 4.9.13

# oc -n openshift-monitoring get pod | grep grafana
grafana-79f8447cbb-vgwf8   2/2   Running   0   23m

No errors in the grafana/grafana-proxy containers.

Upgrade from 4.9.13 to 4.10.0-0.nightly-2022-01-05-181126:
# oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.10.0-0.nightly-2022-01-05-181126 --force=true --allow-explicit-upgrade=true

# oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-05-181126   True        False         5m      Cluster version is 4.10.0-0.nightly-2022-01-05-181126

# oc -n openshift-monitoring get pod | grep grafana
grafana-5947fc4ffd-ktv9l   3/3   Running   0   22m

403 errors in the grafana container; see the must-gather for the full log.
# oc -n openshift-monitoring logs grafana-5947fc4ffd-ktv9l -c grafana
...
t=2022-01-06T18:06:57+0000 lvl=warn msg="[Deprecated] the use of basicAuthPassword field is deprecated. Please use secureJsonData.basicAuthPassword" logger=provisioning.datasources datasource name=prometheus
t=2022-01-06T18:06:57+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=prometheus uid=
t=2022-01-06T18:06:57+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
t=2022-01-06T18:06:57+0000 lvl=eror msg="Can't read alert notification provisioning files from directory" logger=provisioning.notifiers path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
t=2022-01-06T18:06:57+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=127.0.0.1:3001 protocol=http subUrl= socket=
t=2022-01-06T18:07:07+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/series status=403 remote_addr="209.132.188.14, 10.131.0.43" time_ms=222 size=85975 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:21+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/series status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=8 size=85975 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:24+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/query_range status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=2 size=86051 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:24+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/query_range status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=3 size=86139 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:25+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/query_range status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=2 size=86389 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"

No error in grafana-proxy:
# oc -n openshift-monitoring logs grafana-5947fc4ffd-ktv9l -c grafana-proxy
2022/01/06 18:06:55 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:grafana
2022/01/06 18:06:55 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2022/01/06 18:06:55 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2022/01/06 18:06:58 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:3001/"
2022/01/06 18:06:58 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:grafana
2022/01/06 18:06:58 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled
I0106 18:06:58.579219       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
2022/01/06 18:06:58 http.go:107: HTTPS: listening on [::]:3000

Log in via the Grafana route and check a dashboard: 403 errors for all graphs (see the attached picture). Besides the 403 API error, hovering over the "!" in a graph shows the following info:

Templating
Template variable service failed

<!DOCTYPE html> <html lang="en" charset="utf-8"> <head> <title>Log In</title> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"> <style> @font-face { font-family: "Open Sans"; src: url(data:application/x-font-woff;charset=utf- ....

It seems there is an authentication issue.

Version-Release number of selected component (if applicable):
4.9.13 upgraded to 4.10.0-0.nightly-2022-01-05-181126. NOTE: 4.9.13 uses Grafana 7.5.5, 4.10 uses 7.5.11.

How reproducible:
always

Steps to Reproduce:
1. On 4.9.13, log in via the Grafana route and check that the graphs are normal on each dashboard.
2. Upgrade to 4.10.0-0.nightly-2022-01-05-181126 and check the graphs again.
3.

Actual results:
All graphs on each dashboard show a 403 Forbidden error after upgrading to 4.10.

Expected results:
No error on the dashboards.

Additional info:
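For triage, the 403 entries can be pulled out of the grafana container log with a small filter. This is a minimal sketch only; the `filter_grafana_403` helper name is mine, not part of any OpenShift tooling, and it assumes the key=value log format shown above:

```shell
# filter_grafana_403: read Grafana log lines on stdin and, for every
# "Request Completed" entry with status=403, print the request path and
# the reported latency. Non-403 lines are dropped.
filter_grafana_403() {
  grep 'msg="Request Completed"' | grep 'status=403' |
  awk '{
    path = ""; ms = ""
    for (i = 1; i <= NF; i++) {
      if ($i ~ /^path=/)    { sub(/^path=/, "", $i);    path = $i }
      if ($i ~ /^time_ms=/) { sub(/^time_ms=/, "", $i); ms = $i }
    }
    print path, ms "ms"
  }'
}
```

Fed from the live pod it would look like `oc -n openshift-monitoring logs grafana-5947fc4ffd-ktv9l -c grafana | filter_grafana_403` (pod name taken from this report; it differs per cluster).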
Marked as blocker+ since it's a regression and the Grafana service isn't available anymore.
@Junqi have you tried refreshing the page? It might be because the grafana pod restarted during the upgrade and lost its local data.
Template variable service failed

<!DOCTYPE html> <html lang="en" charset="utf-8"> <head> <title>Log In</title> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"> <style> @font-face { font-family: "Open Sans"; src: url(data:application/x-font-woff;charset=utf-

Based on this log, it seems like an error from the OAuth proxy. I will try to reproduce the error.
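The payload above is an HTML login page returned where Grafana expected a Prometheus JSON response, which is what breaks the templating query. A rough classifier for a captured proxy response body can make that explicit; this is a heuristic sketch only, and `is_login_page` is a made-up helper, not part of any tooling:

```shell
# is_login_page: read an HTTP response body on stdin and report whether
# it looks like the OAuth login page rather than a Prometheus JSON reply.
# Heuristic: an authenticated Prometheus API response starts with '{',
# while the login page is an HTML document with a "Log In" title.
is_login_page() {
  body=$(cat)
  case "$body" in
    *"<title>Log In</title>"*) echo "login-page" ;;
    "{"*)                      echo "json" ;;
    *)                         echo "unknown" ;;
  esac
}
```

The body to classify could be captured with `curl` against the datasource proxy path from the logs above, using a valid bearer token.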
(In reply to Simon Pasquier from comment #6)
> @Junqi have you tried to refresh the page. It might be because the grafana
> pod has restarted during the upgrade and has lost its local data?

Refreshing doesn't help; still a 403 error.
The error is reproducible and there are error logs in the prometheus oauth-proxy. Working on figuring out the root cause.
Upgraded from 4.9.13 to 4.10.0-0.nightly-2022-01-11-014938; after the upgrade, the Grafana dashboards show data in the graphs.
This problem is reproduced on the Power platform with build https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.10.0-fc.1/, upgrading from OCP 4.9.15. The Grafana UI does not show any data; it shows "Forbidden". The grafana pod did NOT restart.

[root@zsnxt-2760-bastion-0 ~]# oc get pods -n openshift-monitoring
NAME                                          READY   STATUS    RESTARTS        AGE
alertmanager-main-0                           6/6     Running   0               4h51m
alertmanager-main-1                           6/6     Running   0               4h44m
cluster-monitoring-operator-96d8ffc66-dn28x   2/2     Running   0               4h31m
grafana-85896bbc5d-cc7mz                      3/3     Running   0               4h44m
kube-state-metrics-84f498c4d5-gf54c           3/3     Running   1 (4h44m ago)   4h44m
node-exporter-8zrzz                           2/2     Running   2               5h12m
node-exporter-b6lvf                           2/2     Running   2               5h13m
node-exporter-kvxw5                           2/2     Running   2               5h12m
node-exporter-spmgq                           2/2     Running   2               5h11m
node-exporter-x5wxz                           2/2     Running   2               5h12m
openshift-state-metrics-58d99989b4-bl28b      3/3     Running   0               4h44m
prometheus-adapter-f8848d5cc-m9fjp            1/1     Running   0               30m
prometheus-adapter-f8848d5cc-v6qv2            1/1     Running   0               30m
prometheus-k8s-0                              6/6     Running   0               4h51m
prometheus-k8s-1                              6/6     Running   0               4h44m
prometheus-operator-7c7dc7d876-rftgl          2/2     Running   1 (4h29m ago)   4h31m
telemeter-client-7bd665c9dc-t456b             3/3     Running   0               4h44m
thanos-querier-8485d999d4-b42d2               6/6     Running   0               4h44m
thanos-querier-8485d999d4-m4g5j               6/6     Running   0               4h51m
Hello Julie, would you be able to provide logs from the prometheus oauth-proxy?
Upgraded from 4.9.15 to 4.10.0-fc.1: no error for Grafana. I suggest we close this bug and open a new bug for the ppc64le cluster.

# oc get clusterversion version -oyaml
...
  history:
  - completionTime: "2022-01-18T08:04:33Z"
    image: registry.ci.openshift.org/ocp/release@sha256:9f3ac86ba907abba3ffbae580433218eef3f1934c3353caf331587ac7c450ff0
    startedTime: "2022-01-18T07:02:38Z"
    state: Completed
    verified: true
    version: 4.10.0-fc.1
  - completionTime: "2022-01-18T06:38:53Z"
    image: quay.io/openshift-release-dev/ocp-release@sha256:bb1987fb718f81fb30bec4e0e1cd5772945269b77006576b02546cf84c77498e
    startedTime: "2022-01-18T06:20:17Z"
    state: Completed
    verified: false
    version: 4.9.15

# oc -n openshift-monitoring logs -c grafana-proxy grafana-6857495cf4-nk4m7
2022/01/18 07:49:23 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:grafana
2022/01/18 07:49:23 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2022/01/18 07:49:23 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2022/01/18 07:49:30 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:3001/"
2022/01/18 07:49:30 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:grafana
2022/01/18 07:49:30 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled
2022/01/18 07:49:30 http.go:107: HTTPS: listening on [::]:3000
I0118 07:49:30.606074       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
(In reply to Junqi Zhao from comment #15)
> upgraded from 4.9.15 to 4.10.0-fc.1, no error for grafana

Upgraded from 4.9.15 to 4.10.0-fc.1 in an AWS cluster: no error for Grafana.
(In reply to Prashant Balachandran from comment #14)
> Hello Julie, would you be able to provide logs from the prometheus
> oauth-proxy?

Logs from grafana-proxy are attached here.
Created attachment 1851548 [details] grafana-proxy-logs-on-power
Can you provide the must gather for this cluster? I tried on AWS and it is not reproducible.
(In reply to Prashant Balachandran from comment #20)
> Can you provide the must gather for this cluster? I tried on AWS and it is
> not reproducible.

We lost that cluster, unfortunately. Deployed a fresh new 4.9.15 cluster on the same Power test environment and upgraded it to the 4.10.0-fc.1 build. The Grafana dashboard is showing data, and graphs are visible now. NOT able to reproduce the issue. Anyway, I am attaching all the relevant data here (in case you want to compare the pod logs on this new cluster with those of the old one).

must-gather logs: https://drive.google.com/drive/folders/1L-zmdZ0Pq-GOEjO6tRiTKaR-WjaEblIJ?usp=sharing

[root@varad-9826-bastion-0 e2e_tests_results]# oc version
Client Version: 4.9.15
Server Version: 4.10.0-fc.1
Kubernetes Version: v1.23.0+50f645e

[root@varad-9826-bastion-0 ~]# oc get pods -n openshift-monitoring
NAME                                          READY   STATUS    RESTARTS   AGE
alertmanager-main-0                           6/6     Running   0          12h
alertmanager-main-1                           6/6     Running   0          12h
cluster-monitoring-operator-96d8ffc66-p85lg   2/2     Running   0          12h
grafana-d588df7db-jbmgf                       3/3     Running   0          12h
kube-state-metrics-84f498c4d5-vrqlf           3/3     Running   0          12h
node-exporter-5b8l2                           2/2     Running   2          12h
node-exporter-p4lp2                           2/2     Running   2          12h
node-exporter-qtbhb                           2/2     Running   2          12h
node-exporter-txzm6                           2/2     Running   2          12h
node-exporter-zmvs4                           2/2     Running   2          12h
openshift-state-metrics-58d99989b4-q8bjw      3/3     Running   0          12h
prometheus-adapter-b5b84b88f-hjs7m            1/1     Running   0          117m
prometheus-adapter-b5b84b88f-tw57t            1/1     Running   0          117m
prometheus-k8s-0                              6/6     Running   0          12h
prometheus-k8s-1                              6/6     Running   0          12h
prometheus-operator-7c7dc7d876-plz9n          2/2     Running   0          12h
telemeter-client-7d849bcff4-xs589             3/3     Running   0          12h
thanos-querier-7bd4d5f698-6zdwt               6/6     Running   0          12h
thanos-querier-7bd4d5f698-rt9qf               6/6     Running   0          12h

grafana-pod-logs-on-new-cluster is attached here.
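Since a pod restart during the upgrade was one of the suspicions, the `oc get pods` output above can be scanned mechanically for pods that restarted or are not fully ready. A sketch assuming the standard `oc get pods` column layout (`flag_unhealthy_pods` is my own helper name, not an oc subcommand):

```shell
# flag_unhealthy_pods: read `oc get pods` output (including the header)
# on stdin and print the name of every pod whose READY count is not full
# or whose RESTARTS count is nonzero.
flag_unhealthy_pods() {
  awk 'NR > 1 {
    split($2, r, "/")   # READY column, e.g. "5/6" -> r[1]=5, r[2]=6
    restarts = $4 + 0   # RESTARTS column; "1 (4h29m ago)" coerces to 1
    if (r[1] != r[2] || restarts != 0) print $1
  }'
}
```

Usage would be `oc get pods -n openshift-monitoring | flag_unhealthy_pods`; on the listing above it would flag only the node-exporter pods (restarts from the node reboots) and nothing for grafana, matching the observation that the grafana pod did not restart.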
Created attachment 1851829 [details] grafana-pod-logs-on-new-cluster
The fix is in 4.10.0-0.nightly-2022-01-19-212639 and later builds. Upgraded from 4.9.15 to 4.10.0-0.nightly-2022-01-19-212639: no error for the Grafana dashboards. Also checked in a fresh 4.10.0-0.nightly-2022-01-19-212639 cluster: no error for the Grafana dashboards either.

# oc get clusterversion -oyaml
...
  history:
  - completionTime: "2022-01-20T02:15:49Z"
    image: registry.ci.openshift.org/ocp/release@sha256:9633ec18f1ab43dd3c02d391db0f178deb698b5e708222089d063b181eb7add4
    startedTime: "2022-01-20T01:11:04Z"
    state: Completed
    verified: false
    version: 4.10.0-0.nightly-2022-01-19-212639
  - completionTime: "2022-01-20T00:56:49Z"
    image: quay.io/openshift-release-dev/ocp-release@sha256:bb1987fb718f81fb30bec4e0e1cd5772945269b77006576b02546cf84c77498e
    startedTime: "2022-01-20T00:31:32Z"
    state: Completed
    verified: false
    version: 4.9.15

# oc -n openshift-monitoring get secret grafana-datasources -o jsonpath="{.data.datasources\.yaml}" | base64 -d
{
  "apiVersion": 1,
  "datasources": [
    {
      "access": "proxy",
      "basicAuth": true,
      "basicAuthPassword": "",
      "basicAuthUser": "internal",
      "editable": false,
      "jsonData": {
        "tlsSkipVerify": true
      },
      "name": "prometheus",
      "orgId": 1,
      "type": "prometheus",
      "url": "https://prometheus-k8s.openshift-monitoring.svc:9091",
      "version": 1
    }
  ]
}
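To eyeball the datasource auth fields after decoding the secret as above, a quick field extractor can help. This is a sketch only: `datasource_summary` is a made-up helper, and a real check should parse the JSON with a proper parser such as `jq` rather than string-splitting:

```shell
# datasource_summary: read the decoded datasources.yaml JSON on stdin and
# print the datasource name, URL, and basicAuthUser as key=value lines.
# Crude comma/quote splitting; adequate for this flat payload only.
datasource_summary() {
  tr ',' '\n' | awk -F'"' '
    $2 == "name" || $2 == "url" || $2 == "basicAuthUser" { print $2 "=" $4 }
  '
}
```

Usage: `oc -n openshift-monitoring get secret grafana-datasources -o jsonpath="{.data.datasources\.yaml}" | base64 -d | datasource_summary`.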
*** Bug 2043098 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056