Description of problem:
cluster-monitoring-operator container logs show "[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead" warnings.

# oc -n openshift-monitoring logs $(oc -n openshift-monitoring get po | grep cluster-monitoring-operator | awk '{print $1}') -c cluster-monitoring-operator | grep "beta.kubernetes.io/os"
W0819 10:13:13.993849 1 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
W0819 10:19:19.223029 1 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
W0819 10:19:20.826225 1 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
W0819 13:23:31.641396 1 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
W0819 13:28:38.732359 1 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
W0819 13:29:28.979494 1 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead

# oc -n openshift-monitoring get deploy cluster-monitoring-operator -oyaml | grep nodeSelector -A4
      nodeSelector:
        beta.kubernetes.io/os: linux
        node-role.kubernetes.io/master: ""
      priorityClassName: system-cluster-critical
      restartPolicy: Always

Both "beta.kubernetes.io/os" and "kubernetes.io/os" exist on each node:

# oc get no --show-labels
NAME                                         STATUS   ROLES    AGE   VERSION                LABELS
ip-10-0-141-153.us-east-2.compute.internal   Ready    master   36m   v1.22.0-rc.0+f967870
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2a,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-141-153.us-east-2.compute.internal,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=m5.xlarge,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2a,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2a
ip-10-0-157-251.us-east-2.compute.internal   Ready    worker   30m   v1.22.0-rc.0+f967870
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2a,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-157-251.us-east-2.compute.internal,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m5.xlarge,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2a,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2a
ip-10-0-168-159.us-east-2.compute.internal   Ready    master   36m   v1.22.0-rc.0+f967870
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2b,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-168-159.us-east-2.compute.internal,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=m5.xlarge,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2b,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2b
ip-10-0-183-207.us-east-2.compute.internal   Ready    worker   31m   v1.22.0-rc.0+f967870
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2b,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-183-207.us-east-2.compute.internal,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m5.xlarge,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2b,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2b
ip-10-0-201-133.us-east-2.compute.internal   Ready    worker   31m   v1.22.0-rc.0+f967870
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2c,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-201-133.us-east-2.compute.internal,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m5.xlarge,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2c,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2c
ip-10-0-220-27.us-east-2.compute.internal    Ready    master   36m   v1.22.0-rc.0+f967870
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2c,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-220-27.us-east-2.compute.internal,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=m5.xlarge,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2c,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2c

Version-Release number of selected component (if applicable):
4.9.0-0.nightly-2021-08-18-144658

How reproducible:
always

Steps to Reproduce:
1. see the description
2.
3.
Actual results:
Monitoring components (cluster-monitoring-operator, grafana, telemeter-client, thanos-querier) still set beta.kubernetes.io/os: linux in their nodeSelector, so the API server logs the deprecation warnings above.

Expected results:
Components should use the stable kubernetes.io/os label, with no deprecation warnings logged.

Additional info:
$ grep -r beta.kubernetes.io/os assets/ manifests/
assets/grafana/deployment.yaml:        beta.kubernetes.io/os: linux
assets/telemeter-client/deployment.yaml:        beta.kubernetes.io/os: linux
assets/thanos-querier/deployment.yaml:        beta.kubernetes.io/os: linux
manifests/0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml:        beta.kubernetes.io/os: linux
manifests/0000_50_cluster-monitoring-operator_05-deployment.yaml:        beta.kubernetes.io/os: linux
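Each of those occurrences only needs the label key renamed; the value stays linux. As a sketch of the intended state, the cluster-monitoring-operator deployment's nodeSelector would become:

```yaml
nodeSelector:
  kubernetes.io/os: linux
  node-role.kubernetes.io/master: ""
```

The asset deployments (grafana, telemeter-client, thanos-querier) carry only the OS key, so for them the change is the single kubernetes.io/os line.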
I created PRs against the respective upstream repos and the downstream ones. I will update CMO once they are merged.
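The change in those PRs is mechanical: rename the nodeSelector key wherever it appears. A hedged illustration of the substitution (demo only, not the actual PR content):

```shell
# Show the rename on a sample nodeSelector fragment; the same sed expression
# could be run across assets/ and manifests/ in a checkout (review the diff
# before committing).
printf 'nodeSelector:\n  beta.kubernetes.io/os: linux\n' \
  | sed 's|beta\.kubernetes\.io/os|kubernetes.io/os|'
# prints:
# nodeSelector:
#   kubernetes.io/os: linux
```

Applied repo-wide, that would be something like `grep -rl 'beta\.kubernetes\.io/os' assets/ manifests/ | xargs sed -i 's|beta\.kubernetes\.io/os|kubernetes.io/os|g'` (untested sketch; the real fix lands via the generated jsonnet upstream).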
All other upstream and downstream PRs have merged. Once https://github.com/prometheus-operator/kube-prometheus/pull/1348 is merged as well, I will sync CMO to pick up the changes.
Verified with 4.9.0-0.nightly-2021-08-30-192239; nodeSelector is now kubernetes.io/os: linux for each component.

# for i in $(oc -n openshift-monitoring get pod | grep -v NAME | awk '{print $1}'); do echo $i; oc -n openshift-monitoring get pod $i -oyaml | grep nodeSelector -A3 | grep "kubernetes.io/os: linux"; echo -e "\n"; done
alertmanager-main-0
    kubernetes.io/os: linux

alertmanager-main-1
    kubernetes.io/os: linux

alertmanager-main-2
    kubernetes.io/os: linux

cluster-monitoring-operator-6b5fcf7686-svq52
    kubernetes.io/os: linux

grafana-746cb66c84-r7rvc
    kubernetes.io/os: linux

kube-state-metrics-59b87859b8-6d9w6
    kubernetes.io/os: linux

node-exporter-5jwh2
    kubernetes.io/os: linux

node-exporter-6hdd8
    kubernetes.io/os: linux

node-exporter-br7n8
    kubernetes.io/os: linux

node-exporter-gfgb6
    kubernetes.io/os: linux

node-exporter-jnh87
    kubernetes.io/os: linux

node-exporter-lsr6t
    kubernetes.io/os: linux

openshift-state-metrics-66585c8c7c-5q628
    kubernetes.io/os: linux

prometheus-adapter-7bf848895b-pjs9k
    kubernetes.io/os: linux

prometheus-adapter-7bf848895b-zghfd
    kubernetes.io/os: linux

prometheus-k8s-0
    kubernetes.io/os: linux

prometheus-k8s-1
    kubernetes.io/os: linux

prometheus-operator-78b5644557-2bsx4
    kubernetes.io/os: linux

telemeter-client-7cd99bf8fb-22v24
    kubernetes.io/os: linux

thanos-querier-897886585-85qpc
    kubernetes.io/os: linux

thanos-querier-897886585-wwx5s
    kubernetes.io/os: linux
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759