Bug 2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
Summary: 403 Forbidden error shows for all the graphs in each grafana dashboard after ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.0
Assignee: Prashant Balachandran
QA Contact: Junqi Zhao
URL:
Whiteboard:
Duplicates: 2043098
Depends On:
Blocks:
 
Reported: 2022-01-06 18:45 UTC by Junqi Zhao
Modified: 2024-05-03 12:49 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-10 16:37:33 UTC
Target Upstream Version:
Embargoed:
juzhao: needinfo-


Attachments
4.9.13 dashboard, no issue for the graph (173.19 KB, image/png) - 2022-01-06 18:45 UTC, Junqi Zhao
grafana-proxy-logs-on-power (8.95 KB, text/plain) - 2022-01-18 11:22 UTC, Julie
grafana-pod-logs-on-new-cluster (5.28 KB, text/plain) - 2022-01-19 09:05 UTC, Julie


Links
GitHub openshift/cluster-monitoring-operator pull 1533 (open): Bug 2037891: Changing the grafana data source secret to be updatable. Last updated 2022-01-10 10:08:56 UTC
GitHub openshift/cluster-monitoring-operator pull 1539 (open): Bug 2037891: Reverting the secureJsonData change for the grafana password. Last updated 2022-01-19 07:35:49 UTC
Red Hat Product Errata RHSA-2022:0056 - Last updated 2022-03-10 16:37:45 UTC

Description Junqi Zhao 2022-01-06 18:45:28 UTC
Created attachment 1849321 [details]
4.9.13 dashboard, no issue for the graph

Description of problem:
On 4.9.13, after logging in to the Grafana route, all graphs display correctly in every dashboard. After upgrading to 4.10.0-0.nightly-2022-01-05-181126, a 403 Forbidden error shows for all the graphs in every dashboard.
Also checked a fresh 4.10.0-0.nightly-2022-01-05-181126 cluster: no such error there. The error only occurs when upgrading from 4.9 to 4.10.

NOTE: all the attached pictures use the etcd dashboard as an example
# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.13    True        False         10m     Cluster version is 4.9.13

# oc -n openshift-monitoring get pod | grep grafana
grafana-79f8447cbb-vgwf8                       2/2     Running   0             23m

No errors in the grafana or grafana-proxy containers.

upgrade from 4.9.13 to 4.10.0-0.nightly-2022-01-05-181126
# oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.10.0-0.nightly-2022-01-05-181126 --force=true --allow-explicit-upgrade=true

# oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-05-181126   True        False         5m      Cluster version is 4.10.0-0.nightly-2022-01-05-181126


# oc -n openshift-monitoring get pod | grep grafana
grafana-5947fc4ffd-ktv9l                      3/3     Running   0          22m

403 errors in the grafana container; see the full log in the must-gather.
# oc -n openshift-monitoring logs grafana-5947fc4ffd-ktv9l -c grafana
...
t=2022-01-06T18:06:57+0000 lvl=warn msg="[Deprecated] the use of basicAuthPassword field is deprecated. Please use secureJsonData.basicAuthPassword" logger=provisioning.datasources datasource name=prometheus
t=2022-01-06T18:06:57+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=prometheus uid=
t=2022-01-06T18:06:57+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
t=2022-01-06T18:06:57+0000 lvl=eror msg="Can't read alert notification provisioning files from directory" logger=provisioning.notifiers path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
t=2022-01-06T18:06:57+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=127.0.0.1:3001 protocol=http subUrl= socket=
t=2022-01-06T18:07:07+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/series status=403 remote_addr="209.132.188.14, 10.131.0.43" time_ms=222 size=85975 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:21+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/series status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=8 size=85975 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:24+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/query_range status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=2 size=86051 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:24+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/query_range status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=3 size=86139 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
t=2022-01-06T18:07:25+0000 lvl=info msg="Request Completed" logger=context userId=2 orgId=1 uname=kube:admin method=GET path=/api/datasources/proxy/1/api/v1/query_range status=403 remote_addr="209.132.188.14, 10.129.2.12" time_ms=2 size=86389 referer="https://grafana-openshift-monitoring.apps.juzhao-49.qe.devcluster.openshift.com/d/c2f4e12cdf69feb95caa41a5a1b423d9/etcd?orgId=1&refresh=10s"
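
The deprecation warning above concerns Grafana's datasource provisioning format, and the two linked PRs (1533, 1539) touch exactly that path. The datasource configuration that cluster-monitoring-operator provisions can be decoded from the grafana-datasources secret (the same check run in comment 24 below):

# oc -n openshift-monitoring get secret grafana-datasources -o jsonpath="{.data.datasources\.yaml}" | base64 -d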

No errors in grafana-proxy:
# oc -n openshift-monitoring logs grafana-5947fc4ffd-ktv9l -c grafana-proxy
2022/01/06 18:06:55 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:grafana
2022/01/06 18:06:55 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2022/01/06 18:06:55 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2022/01/06 18:06:58 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:3001/"
2022/01/06 18:06:58 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:grafana
2022/01/06 18:06:58 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled
I0106 18:06:58.579219       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
2022/01/06 18:06:58 http.go:107: HTTPS: listening on [::]:3000

After logging in to the Grafana route and checking the dashboards, all graphs show a 403 error. As seen in the attached picture, besides the 403 API error, hovering over the "!" in a graph shows the following info:
Templating
Template variable service failed <!DOCTYPE html> <html lang="en" charset="utf-8"> <head> <title>Log In</title> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"> <style> @font-face { font-family: "Open Sans"; src: url(data:application/x-font-woff;charset=utf-
....

It seems there is an issue with authentication.
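
A sketch for narrowing this down, assuming curl is available in the prometheus container image and that the prometheus-k8s service account token has the required RBAC: query Prometheus directly on port 9091 (the same endpoint the datasource URL points to) with a bearer token. If this succeeds while Grafana's datasource proxy keeps returning 403, the credentials stored in the provisioned datasource, rather than Prometheus-side auth, are the likely culprit.

# TOKEN=$(oc -n openshift-monitoring sa get-token prometheus-k8s)
# oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- curl -sk -H "Authorization: Bearer $TOKEN" "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=up"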

Version-Release number of selected component (if applicable):
4.9.13 upgraded to 4.10.0-0.nightly-2022-01-05-181126.
NOTE: 4.9.13 uses Grafana 7.5.5; 4.10 uses 7.5.11.
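
To confirm which Grafana version a running pod actually ships (a sketch; grafana-server in 7.x prints its version with -v, and oc exec accepts a deployment reference):

# oc -n openshift-monitoring exec deploy/grafana -c grafana -- grafana-server -v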

How reproducible:
always

Steps to Reproduce:
1. On 4.9.13, log in to the Grafana route and check that the graphs are normal in each dashboard.
2. Upgrade to 4.10.0-0.nightly-2022-01-05-181126 and check the graphs again.

Actual results:
All graphs in every dashboard show a 403 Forbidden error after upgrading to 4.10.

Expected results:
No errors in the dashboards.

Additional info:

Comment 4 Simon Pasquier 2022-01-07 12:08:23 UTC
Marked as blocker+ since it's a regression and the Grafana service isn't available anymore.

Comment 6 Simon Pasquier 2022-01-07 13:28:45 UTC
@Junqi, have you tried refreshing the page? It might be because the grafana pod restarted during the upgrade and lost its local data.

Comment 7 Prashant Balachandran 2022-01-07 13:48:16 UTC
Template variable service failed <!DOCTYPE html> <html lang="en" charset="utf-8"> <head> <title>Log In</title> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"> <style> @font-face { font-family: "Open Sans"; src: url(data:application/x-font-woff;charset=utf-


Based on this log, it seems like an error from the oauth proxy. I will try to reproduce the error.

Comment 8 Junqi Zhao 2022-01-10 01:24:02 UTC
(In reply to Simon Pasquier from comment #6)
> @Junqi, have you tried refreshing the page? It might be because the grafana
> pod restarted during the upgrade and lost its local data.

Refreshing doesn't help; still a 403 error.

Comment 9 Prashant Balachandran 2022-01-10 06:39:22 UTC
The error is reproducible and there are error logs in the prometheus oauth-proxy. I am working on figuring out the root cause.

Comment 11 Junqi Zhao 2022-01-11 07:57:45 UTC
Upgraded from 4.9.13 to 4.10.0-0.nightly-2022-01-11-014938; after the upgrade, the Grafana dashboards show data in the graphs.

Comment 13 Julie 2022-01-17 14:11:22 UTC
This problem is reproduced on the Power platform with build https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.10.0-fc.1/, upgrading from OCP 4.9.15.
The Grafana UI does not show any data; it shows "Forbidden".

The grafana pod did NOT restart.

[root@zsnxt-2760-bastion-0 ~]# oc get pods -n openshift-monitoring
NAME                                          READY   STATUS    RESTARTS        AGE
alertmanager-main-0                           6/6     Running   0               4h51m
alertmanager-main-1                           6/6     Running   0               4h44m
cluster-monitoring-operator-96d8ffc66-dn28x   2/2     Running   0               4h31m
grafana-85896bbc5d-cc7mz                      3/3     Running   0               4h44m
kube-state-metrics-84f498c4d5-gf54c           3/3     Running   1 (4h44m ago)   4h44m
node-exporter-8zrzz                           2/2     Running   2               5h12m
node-exporter-b6lvf                           2/2     Running   2               5h13m
node-exporter-kvxw5                           2/2     Running   2               5h12m
node-exporter-spmgq                           2/2     Running   2               5h11m
node-exporter-x5wxz                           2/2     Running   2               5h12m
openshift-state-metrics-58d99989b4-bl28b      3/3     Running   0               4h44m
prometheus-adapter-f8848d5cc-m9fjp            1/1     Running   0               30m
prometheus-adapter-f8848d5cc-v6qv2            1/1     Running   0               30m
prometheus-k8s-0                              6/6     Running   0               4h51m
prometheus-k8s-1                              6/6     Running   0               4h44m
prometheus-operator-7c7dc7d876-rftgl          2/2     Running   1 (4h29m ago)   4h31m
telemeter-client-7bd665c9dc-t456b             3/3     Running   0               4h44m
thanos-querier-8485d999d4-b42d2               6/6     Running   0               4h44m
thanos-querier-8485d999d4-m4g5j               6/6     Running   0               4h51m

Comment 14 Prashant Balachandran 2022-01-18 06:18:29 UTC
Hello Julie, would you be able to provide logs from the prometheus oauth-proxy?

Comment 15 Junqi Zhao 2022-01-18 08:34:40 UTC
Upgraded from 4.9.15 to 4.10.0-fc.1; no error for Grafana. I suggest we close this bug and open a new one for the ppc64le cluster.
# oc get clusterversion version -oyaml
...
    history:
    - completionTime: "2022-01-18T08:04:33Z"
      image: registry.ci.openshift.org/ocp/release@sha256:9f3ac86ba907abba3ffbae580433218eef3f1934c3353caf331587ac7c450ff0
      startedTime: "2022-01-18T07:02:38Z"
      state: Completed
      verified: true
      version: 4.10.0-fc.1
    - completionTime: "2022-01-18T06:38:53Z"
      image: quay.io/openshift-release-dev/ocp-release@sha256:bb1987fb718f81fb30bec4e0e1cd5772945269b77006576b02546cf84c77498e
      startedTime: "2022-01-18T06:20:17Z"
      state: Completed
      verified: false
      version: 4.9.15
# oc -n openshift-monitoring logs -c grafana-proxy grafana-6857495cf4-nk4m7
2022/01/18 07:49:23 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:grafana
2022/01/18 07:49:23 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2022/01/18 07:49:23 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2022/01/18 07:49:30 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:3001/"
2022/01/18 07:49:30 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:grafana
2022/01/18 07:49:30 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled
2022/01/18 07:49:30 http.go:107: HTTPS: listening on [::]:3000
I0118 07:49:30.606074       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key

Comment 17 Junqi Zhao 2022-01-18 08:40:10 UTC
(In reply to Junqi Zhao from comment #15)
> Upgraded from 4.9.15 to 4.10.0-fc.1; no error for Grafana.

Upgraded from 4.9.15 to 4.10.0-fc.1 in an AWS cluster; no error for Grafana.

Comment 18 Julie 2022-01-18 11:21:23 UTC
(In reply to Prashant Balachandran from comment #14)
> Hello Julie, would you be able to provide logs from the prometheus
> oauth-proxy?


Logs from grafana-proxy are attached here.

Comment 19 Julie 2022-01-18 11:22:35 UTC
Created attachment 1851548 [details]
grafana-proxy-logs-on-power

Comment 20 Prashant Balachandran 2022-01-18 11:55:22 UTC
Can you provide the must-gather for this cluster? I tried on AWS and it is not reproducible.
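
For reference, the standard collection command (the --dest-dir flag is optional and only sets the local output directory):

# oc adm must-gather --dest-dir=./must-gather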

Comment 21 Julie 2022-01-19 09:04:59 UTC
(In reply to Prashant Balachandran from comment #20)
> Can you provide the must-gather for this cluster? I tried on AWS and it is
> not reproducible.

We lost that cluster, unfortunately.
Deployed a fresh 4.9.15 cluster on the same Power test environment and upgraded it to the 4.10.0-fc.1 build.
The Grafana dashboard is showing data and graphs are visible now. NOT able to reproduce the issue.

Anyway, I am attaching all the relevant data here (in case you want to compare the pod logs on this new cluster with those of the old one).

must-gather logs:
https://drive.google.com/drive/folders/1L-zmdZ0Pq-GOEjO6tRiTKaR-WjaEblIJ?usp=sharing


[root@varad-9826-bastion-0 e2e_tests_results]# oc version
Client Version: 4.9.15
Server Version: 4.10.0-fc.1
Kubernetes Version: v1.23.0+50f645e

[root@varad-9826-bastion-0 ~]# oc get pods -n openshift-monitoring

NAME                                          READY   STATUS    RESTARTS   AGE
alertmanager-main-0                           6/6     Running   0          12h
alertmanager-main-1                           6/6     Running   0          12h
cluster-monitoring-operator-96d8ffc66-p85lg   2/2     Running   0          12h
grafana-d588df7db-jbmgf                       3/3     Running   0          12h
kube-state-metrics-84f498c4d5-vrqlf           3/3     Running   0          12h
node-exporter-5b8l2                           2/2     Running   2          12h
node-exporter-p4lp2                           2/2     Running   2          12h
node-exporter-qtbhb                           2/2     Running   2          12h
node-exporter-txzm6                           2/2     Running   2          12h
node-exporter-zmvs4                           2/2     Running   2          12h
openshift-state-metrics-58d99989b4-q8bjw      3/3     Running   0          12h
prometheus-adapter-b5b84b88f-hjs7m            1/1     Running   0          117m
prometheus-adapter-b5b84b88f-tw57t            1/1     Running   0          117m
prometheus-k8s-0                              6/6     Running   0          12h
prometheus-k8s-1                              6/6     Running   0          12h
prometheus-operator-7c7dc7d876-plz9n          2/2     Running   0          12h
telemeter-client-7d849bcff4-xs589             3/3     Running   0          12h
thanos-querier-7bd4d5f698-6zdwt               6/6     Running   0          12h
thanos-querier-7bd4d5f698-rt9qf               6/6     Running   0          12h

grafana-pod-logs-on-new-cluster is attached here.

Comment 22 Julie 2022-01-19 09:05:57 UTC
Created attachment 1851829 [details]
grafana-pod-logs-on-new-cluster

Comment 24 Junqi Zhao 2022-01-20 02:36:38 UTC
The fix is in 4.10.0-0.nightly-2022-01-19-212639 and later builds. Upgraded from 4.9.15 to 4.10.0-0.nightly-2022-01-19-212639; no error in the Grafana dashboards.
Also checked a fresh 4.10.0-0.nightly-2022-01-19-212639 cluster; no error in the Grafana dashboards there either.
# oc get clusterversion -oyaml
...
    history:
    - completionTime: "2022-01-20T02:15:49Z"
      image: registry.ci.openshift.org/ocp/release@sha256:9633ec18f1ab43dd3c02d391db0f178deb698b5e708222089d063b181eb7add4
      startedTime: "2022-01-20T01:11:04Z"
      state: Completed
      verified: false
      version: 4.10.0-0.nightly-2022-01-19-212639
    - completionTime: "2022-01-20T00:56:49Z"
      image: quay.io/openshift-release-dev/ocp-release@sha256:bb1987fb718f81fb30bec4e0e1cd5772945269b77006576b02546cf84c77498e
      startedTime: "2022-01-20T00:31:32Z"
      state: Completed
      verified: false
      version: 4.9.15


# oc -n openshift-monitoring get secret grafana-datasources -o jsonpath="{.data.datasources\.yaml}" | base64 -d
{
    "apiVersion": 1,
    "datasources": [
        {
            "access": "proxy",
            "basicAuth": true,
            "basicAuthPassword": "",
            "basicAuthUser": "internal",
            "editable": false,
            "jsonData": {
                "tlsSkipVerify": true
            },
            "name": "prometheus",
            "orgId": 1,
            "type": "prometheus",
            "url": "https://prometheus-k8s.openshift-monitoring.svc:9091",
            "version": 1
        }
    ]
}
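
Note that basicAuthPassword is empty in the decoded output above. If provisioned credentials ever look stale after an upgrade, one debugging sketch (cluster-monitoring-operator owns the grafana deployment, so any manual intervention is temporary and will be reconciled away) is to restart the deployment so Grafana re-reads the provisioning secret:

# oc -n openshift-monitoring rollout restart deployment/grafana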

Comment 25 Simon Pasquier 2022-01-20 16:24:12 UTC
*** Bug 2043098 has been marked as a duplicate of this bug. ***

Comment 28 errata-xmlrpc 2022-03-10 16:37:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

