Description of problem:
Set a non-existent logLevel for thanosQuerier, then describe the resulting CrashLoopBackOff pod. The Message for "Last State: Terminated" begins with

  hanos/tracing.md/#configuration --version Show application version. ....

"hanos/tracing.md/#configuration" should be "thanos/tracing.md/#configuration".

*****************
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    thanosQuerier:
      logLevel: wwwarn
*****************

# oc -n openshift-monitoring get pod | grep thanos-querier
thanos-querier-684675558b-sjkl4   6/6   Running            0             41m
thanos-querier-77d55db899-k6gcv   4/6   CrashLoopBackOff   5 (45s ago)   2m48s
thanos-querier-77d55db899-qmdwg   4/6   CrashLoopBackOff   5 (48s ago)   2m48s

# oc -n openshift-monitoring logs -c thanos-query thanos-querier-77d55db899-k6gcv
error parsing commandline arguments: [/bin/thanos query --grpc-address=127.0.0.1:10901 --http-address=127.0.0.1:9090 --log.format=logfmt --query.replica-label=prometheus_replica --query.replica-label=thanos_ruler_replica --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local --query.auto-downsampling --store.sd-dns-resolver=miekgdns --grpc-client-tls-secure --grpc-client-tls-cert=/etc/tls/grpc/client.crt --grpc-client-tls-key=/etc/tls/grpc/client.key --grpc-client-tls-ca=/etc/tls/grpc/ca.crt --grpc-client-server-name=prometheus-grpc --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-user-workload-monitoring.svc.cluster.local --store=dnssrv+_grpc._tcp.thanos-ruler-operated.openshift-user-workload-monitoring.svc.cluster.local --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-user-workload-monitoring.svc.cluster.local --rule=dnssrv+_grpc._tcp.thanos-ruler-operated.openshift-user-workload-monitoring.svc.cluster.local --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-user-workload-monitoring.svc.cluster.local --log.level=wwwarn]: enum value must be one of error,warn,info,debug, got 'wwwarn'
usage: thanos query [<flags>]
...

# oc -n openshift-monitoring describe pod thanos-querier-77d55db899-k6gcv
Name:                 thanos-querier-77d55db899-k6gcv
Namespace:            openshift-monitoring
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ip-10-0-189-96.us-east-2.compute.internal/10.0.189.96
Start Time:           Thu, 06 Jan 2022 00:20:12 -0500
Labels:               app.kubernetes.io/component=query-layer
                      app.kubernetes.io/instance=thanos-querier
                      app.kubernetes.io/managed-by=cluster-monitoring-operator
                      app.kubernetes.io/name=thanos-query
                      app.kubernetes.io/part-of=openshift-monitoring
                      app.kubernetes.io/version=0.23.1
                      pod-template-hash=77d55db899
Annotations:          k8s.v1.cni.cncf.io/network-status:
                        [{
                            "name": "openshift-sdn",
                            "interface": "eth0",
                            "ips": [
                                "10.129.2.55"
                            ],
                            "default": true,
                            "dns": {}
                        }]
                      k8s.v1.cni.cncf.io/networks-status:
                        [{
                            "name": "openshift-sdn",
                            "interface": "eth0",
                            "ips": [
                                "10.129.2.55"
                            ],
                            "default": true,
                            "dns": {}
                        }]
                      openshift.io/scc: restricted
Status:               Running
IP:                   10.129.2.55
IPs:
  IP:           10.129.2.55
Controlled By:  ReplicaSet/thanos-querier-77d55db899
Containers:
  thanos-query:
    Container ID:  cri-o://4688133d73e5375fd31741949c73550536c9ff95ff92ec87f59a6ea29ab5dc23
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3836869ebaf548d8a4a6b5416614b49989cdee62ae206fdeaff8bf2e988cab7d
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3836869ebaf548d8a4a6b5416614b49989cdee62ae206fdeaff8bf2e988cab7d
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      query
      --grpc-address=127.0.0.1:10901
      --http-address=127.0.0.1:9090
      --log.format=logfmt
      --query.replica-label=prometheus_replica
      --query.replica-label=thanos_ruler_replica
      --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      --query.auto-downsampling
      --store.sd-dns-resolver=miekgdns
      --grpc-client-tls-secure
      --grpc-client-tls-cert=/etc/tls/grpc/client.crt
      --grpc-client-tls-key=/etc/tls/grpc/client.key
      --grpc-client-tls-ca=/etc/tls/grpc/ca.crt
      --grpc-client-server-name=prometheus-grpc
      --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-user-workload-monitoring.svc.cluster.local
      --store=dnssrv+_grpc._tcp.thanos-ruler-operated.openshift-user-workload-monitoring.svc.cluster.local
      --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-user-workload-monitoring.svc.cluster.local
      --rule=dnssrv+_grpc._tcp.thanos-ruler-operated.openshift-user-workload-monitoring.svc.cluster.local
      --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-user-workload-monitoring.svc.cluster.local
      --log.level=wwwarn
    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    Error
      Message:   hanos/tracing.md/#configuration
                 --version  Show application version.
                 --web.disable-cors  Whether to disable CORS headers to be set by Thanos. By default Thanos sets CORS headers to be allowed by all.
                 --web.external-prefix=""  Static prefix for all HTML links and redirect URLs in the UI query web interface. Actual endpoints are still served on / or the web.route-prefix. This allows thanos UI to be served behind a reverse proxy that strips a URL sub-path.
                 --web.prefix-header=""  Name of HTTP request header used for dynamic prefixing of UI links and redirects. This option is ignored if web.external-prefix argument is set. Security risk: enable this option only if a reverse proxy in front of thanos is resetting the header. The --web.prefix-header=X-Forwarded-Prefix option can be useful, for example, if Thanos UI is served via Traefik reverse proxy with PathPrefixStrip option enabled, which sends the stripped prefix value in X-Forwarded-Prefix header. This allows thanos UI to be served on a sub-path.
                 --web.route-prefix=""  Prefix for API and UI endpoints. This allows thanos UI to be served on a sub-path. Defaults to the value of --web.external-prefix. This option is analogous to --web.route-prefix of Prometheus.
      Exit Code:    2
      Started:      Thu, 06 Jan 2022 00:23:12 -0500
      Finished:     Thu, 06 Jan 2022 00:23:12 -0500
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
      memory:  12Mi
    Environment:
      HOST_IP_ADDRESS:   (v1:status.hostIP)
    Mounts:
      /etc/tls/grpc from secret-grpc-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stlxr (ro)
  oauth-proxy:
    Container ID:  cri-o://f7e9cf47a7bd7aded5fa642296b1deeed7d6972217696b119617a325529d85c7
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5c3de2ea3d4628746de9fc90a556ca3f03af2f83273cfc4a6934c2d7852fe3c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5c3de2ea3d4628746de9fc90a556ca3f03af2f83273cfc4a6934c2d7852fe3c
    Port:          9091/TCP
    Host Port:     0/TCP
    Args:
      -provider=openshift
      -https-address=:9091
      -http-address=
      -email-domain=*
      -upstream=http://localhost:9090
      -openshift-service-account=thanos-querier
      -openshift-sar={"resource": "namespaces", "verb": "get"}
      -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}
      -tls-cert=/etc/tls/private/tls.crt
      -tls-key=/etc/tls/private/tls.key
      -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      -cookie-secret-file=/etc/proxy/secrets/session_secret
      -openshift-ca=/etc/pki/tls/cert.pem
      -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -bypass-auth-for=^/-/(healthy|ready)$
      -htpasswd-file=/etc/proxy/htpasswd/auth
    State:          Running
      Started:      Thu, 06 Jan 2022 00:24:13 -0500
    Last State:     Terminated
      Reason:       Error
      Message:      erseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:22 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:24 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:27 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:32 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:35 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:37 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:42 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:42 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:47 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:48 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:52 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:23:57 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:24:02 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:24:03 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:24:07 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:24:12 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:24:12 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
2022/01/06 05:24:12 reverseproxy.go:490: http: proxy error: dial tcp [::1]:9090: connect: connection refused
      Exit Code:    2
      Started:      Thu, 06 Jan 2022 00:22:13 -0500
      Finished:     Thu, 06 Jan 2022 00:24:12 -0500
    Ready:          False
    Restart Count:  2
    Requests:
      cpu:     1m
      memory:  20Mi
    Liveness:   http-get https://:9091/-/healthy delay=5s timeout=1s period=30s #success=1 #failure=4
    Readiness:  http-get https://:9091/-/ready delay=5s timeout=1s period=5s #success=1 #failure=20
    Environment:
      HTTP_PROXY:
      HTTPS_PROXY:
      NO_PROXY:
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from thanos-querier-trusted-ca-bundle (ro)
      /etc/proxy/htpasswd from secret-thanos-querier-oauth-htpasswd (rw)
      /etc/proxy/secrets from secret-thanos-querier-oauth-cookie (rw)
      /etc/tls/private from secret-thanos-querier-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stlxr (ro)
  kube-rbac-proxy:
    Container ID:  cri-o://f5ef2a3cfa08b95040f19a9d2288198fc525bb69da7eaa38ae5df53e7ca8615c
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269
    Port:          9092/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:9092
      --upstream=http://127.0.0.1:9095
      --config-file=/etc/kube-rbac-proxy/config.yaml
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
      --logtostderr=true
      --allow-paths=/api/v1/query,/api/v1/query_range,/api/v1/labels,/api/v1/label/*/values,/api/v1/series
      --tls-min-version=VersionTLS12
    State:          Running
      Started:      Thu, 06 Jan 2022 00:20:15 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  15Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from secret-thanos-querier-kube-rbac-proxy (rw)
      /etc/tls/private from secret-thanos-querier-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stlxr (ro)
  prom-label-proxy:
    Container ID:  cri-o://0a5797f7569351b6714dc4a946d19e311e91c8fd0b759597d9675013a8a5e6c6
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0460ba92433db720fc969740db2cc261dc2edad551ddcff047043a16f034138
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0460ba92433db720fc969740db2cc261dc2edad551ddcff047043a16f034138
    Port:          <none>
    Host Port:     <none>
    Args:
      --insecure-listen-address=127.0.0.1:9095
      --upstream=http://127.0.0.1:9090
      --label=namespace
      --enable-label-apis
      --error-on-replace
    State:          Running
      Started:      Thu, 06 Jan 2022 00:20:15 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  15Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stlxr (ro)
  kube-rbac-proxy-rules:
    Container ID:  cri-o://9a8746b8980609449d87ebcbc90ad388d9cb12957e774faabdd2d6befe2cb5e6
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269
    Port:          9093/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:9093
      --upstream=http://127.0.0.1:9095
      --config-file=/etc/kube-rbac-proxy/config.yaml
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
      --logtostderr=true
      --allow-paths=/api/v1/rules
      --tls-min-version=VersionTLS12
    State:          Running
      Started:      Thu, 06 Jan 2022 00:20:15 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  15Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from secret-thanos-querier-kube-rbac-proxy-rules (rw)
      /etc/tls/private from secret-thanos-querier-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stlxr (ro)
  kube-rbac-proxy-metrics:
    Container ID:  cri-o://d53fe93148186433158ba6f2968b554892bf7d2be95bf801503d3033dfd7ade4
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269
    Port:          9094/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:9094
      --upstream=http://127.0.0.1:9090
      --config-file=/etc/kube-rbac-proxy/config.yaml
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
      --client-ca-file=/etc/tls/client/client-ca.crt
      --logtostderr=true
      --allow-paths=/metrics
      --tls-min-version=VersionTLS12
    State:          Running
      Started:      Thu, 06 Jan 2022 00:20:16 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  15Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from secret-thanos-querier-kube-rbac-proxy-metrics (rw)
      /etc/tls/client from metrics-client-ca (ro)
      /etc/tls/private from secret-thanos-querier-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stlxr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  secret-thanos-querier-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-tls
    Optional:    false
  secret-thanos-querier-oauth-cookie:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-oauth-cookie
    Optional:    false
  secret-thanos-querier-kube-rbac-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-kube-rbac-proxy
    Optional:    false
  secret-thanos-querier-kube-rbac-proxy-rules:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-kube-rbac-proxy-rules
    Optional:    false
  secret-thanos-querier-kube-rbac-proxy-metrics:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-kube-rbac-proxy-metrics
    Optional:    false
  metrics-client-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      metrics-client-ca
    Optional:  false
  thanos-querier-trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      thanos-querier-trusted-ca-bundle-2rsonso43rc5p
    Optional:  true
  secret-thanos-querier-oauth-htpasswd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-oauth-htpasswd
    Optional:    false
  secret-grpc-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  thanos-querier-grpc-tls-67ig5vd9rbp71
    Optional:    false
  kube-api-access-stlxr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m40s  default-scheduler  0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Normal   Scheduled       3m38s  default-scheduler  Successfully assigned openshift-monitoring/thanos-querier-77d55db899-k6gcv to ip-10-0-189-96.us-east-2.compute.internal
  Normal   Pulled          3m36s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269" already present on machine
  Normal   Pulled          3m36s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269" already present on machine
  Normal   Started         3m36s  kubelet            Started container kube-rbac-proxy-rules
  Normal   Created         3m36s  kubelet            Created container kube-rbac-proxy-rules
  Normal   Pulled          3m36s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5c3de2ea3d4628746de9fc90a556ca3f03af2f83273cfc4a6934c2d7852fe3c" already present on machine
  Normal   Created         3m36s  kubelet            Created container oauth-proxy
  Normal   Started         3m36s  kubelet            Started container oauth-proxy
  Normal   Pulled          3m36s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad4de366563e3175887872d54a03d4e6d79195ea011f28510118a78e3d4d0269" already present on machine
  Normal   Created         3m36s  kubelet            Created container kube-rbac-proxy
  Normal   Started         3m36s  kubelet            Started container kube-rbac-proxy
  Normal   AddedInterface  3m36s  multus             Add eth0 [10.129.2.55/23] from openshift-sdn
  Normal   Created         3m36s  kubelet            Created container prom-label-proxy
  Normal   Started         3m36s  kubelet            Started container prom-label-proxy
  Normal   Pulled          3m36s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0460ba92433db720fc969740db2cc261dc2edad551ddcff047043a16f034138" already present on machine
  Normal   Started         3m35s (x2 over 3m36s)  kubelet  Started container thanos-query
  Normal   Created         3m35s (x2 over 3m36s)  kubelet  Created container thanos-query
  Normal   Pulled          3m35s (x2 over 3m36s)  kubelet  Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3836869ebaf548d8a4a6b5416614b49989cdee62ae206fdeaff8bf2e988cab7d" already present on machine
  Normal   Created         3m35s  kubelet            Created container kube-rbac-proxy-metrics
  Normal   Started         3m35s  kubelet            Started container kube-rbac-proxy-metrics
  Warning  BackOff         3m33s (x2 over 3m34s)  kubelet  Back-off restarting failed container
  Warning  ProbeError      3m29s  kubelet            Readiness probe error: HTTP probe failed with statuscode: 502 body:
  Warning  Unhealthy       3m29s  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 502

Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2022-01-05-181126

How reproducible:
always

Steps to Reproduce:
1. see the description
2.
3.

Actual results:

Expected results:

Additional info:
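For reference, the crash boils down to Thanos's enum check on --log.level, which accepts only error, warn, info and debug. A minimal shell sketch of the same check (the helper name below is made up for illustration; Thanos does this internally via its flag parser, not via shell):

```shell
# Hypothetical helper mirroring the Thanos enum check on --log.level.
valid_log_level() {
  case "$1" in
    error|warn|info|debug) return 0 ;;
    *) return 1 ;;
  esac
}

valid_log_level warn   && echo "warn: accepted"
valid_log_level wwwarn || echo "wwwarn: rejected"
```

Any other value, such as the "wwwarn" above, is rejected before the querier starts serving, so the container exits immediately and goes into CrashLoopBackOff.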
It seems the info in the describe message is truncated, which is why we only see "Message: hanos/tracing.md/#configuration". For the full thanos query help info, see:

# oc -n openshift-monitoring exec -c thanos-query thanos-querier-684675558b-sjkl4 -- thanos query --help
...
      --tracing.config-file=<file-path>
                                 Path to YAML file with tracing configuration.
                                 See format details:
                                 https://thanos.io/tip/thanos/tracing.md/#configuration
      --version                  Show application version.
      --web.disable-cors         Whether to disable CORS headers to be set by
                                 Thanos. By default Thanos sets CORS headers
                                 to be allowed by all.
      --web.external-prefix=""   Static prefix for all HTML links and redirect
                                 URLs in the UI query web interface. Actual
                                 endpoints are still served on / or the
                                 web.route-prefix. This allows thanos UI to be
                                 served behind ..
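Comparing the truncated message with the full help output: the message begins mid-URL, i.e. it is the tail end of the --tracing.config-file help text. A quick suffix check (plain shell, illustrative only):

```shell
# The termination message starts with "hanos/tracing.md/#configuration";
# verify that this is a mid-string cut of the tracing doc URL from the help text.
url="https://thanos.io/tip/thanos/tracing.md/#configuration"
msg_head="hanos/tracing.md/#configuration"
case "$url" in
  *"$msg_head") echo "message head is the tail of the help-text URL" ;;
  *)            echo "no match" ;;
esac
```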
We use "terminationMessagePolicy: FallbackToLogsOnError" for the containers in question, which means stderr is used as the termination message. However, "The log output is limited to 2048 bytes or 80 lines, whichever is smaller." We could fix this by setting the message manually or by truncating the error message differently, but I don't think that is worth the effort. I'll propose a patch to Thanos to not print the full help message when argument parsing fails; if upstream doesn't accept it, I'd lean towards closing this as WONTFIX.
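The size limit quoted above can be illustrated locally. This sketch simulates a 200-line log and applies the same "80 lines or 2048 bytes, whichever is smaller" tail truncation that FallbackToLogsOnError performs (the numbers come from the Kubernetes docs; the pipeline is only an approximation of the kubelet's behavior):

```shell
# Simulated container log: 200 numbered lines.
# Keep at most the last 80 lines, further capped at the last 2048 bytes.
seq 1 200 | tail -n 80 | tail -c 2048
# Here all 80 kept lines (121..200) fit under 2048 bytes; a wide log would
# instead be cut mid-line, exactly like the "hanos/..." message in this bug.
```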
Fixed in https://github.com/openshift/thanos/pull/78 via https://github.com/thanos-io/thanos/pull/5034
Verified with 4.11.0-0.nightly-2022-03-14-113722 (Thanos version 0.25.0), following the setting in Comment 0:
1. Describing the CrashLoopBackOff thanos-querier pod, the error message is now truncated to a smaller size than before, and the error info is visible.
2. The help info is visible in the thanos-query container logs.
See the attached file.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069