Description of problem:

4.10 CI releases still report Kubernetes Version: v1.22.1-4609+60f5a1c6c03d74-dirty instead of the expected 1.23

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. ask cluster bot for a ci cluster (not nightly)
2. oc version

Actual results:

Cluster version
Client Version: 4.9.15
Server Version: 4.10.0-0.ci.test-2022-01-19-160615-ci-op-03yxzv53-latest
Kubernetes Version: v1.22.1-4609+60f5a1c6c03d74-dirty

Expected results:

something more like what's in the nightly:

# oc version
Client Version: 4.10.0-0.nightly-2022-01-18-044014
Server Version: 4.10.0-0.nightly-2022-01-18-044014
Kubernetes Version: v1.23.0+60f5a1c

Additional info:

slack thread https://coreos.slack.com/archives/C01CQA76KMX/p1642615280354600
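For anyone triaging: the "Kubernetes Version" line printed by oc version comes from the server's /version endpoint, so the same value can be read directly from the apiserver. A minimal check (the JSON values below are illustrative, based on the expected version above, not captured from this cluster):

$ oc get --raw /version
{
  "major": "1",
  "minor": "23",
  "gitVersion": "v1.23.0+60f5a1c",
  "gitTreeState": "clean",
  ...
}

A stale gitVersion here points at how the payload's kube-apiserver was stamped at build time, not at the oc client being used.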
A workaround to push the tags is in place for now. We are continuing to debug where the changes need to be made in the release tooling.

https://github.com/openshift/kubernetes/pull/1193/
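For context on where the v1.22.1-based string comes from: the Kubernetes build stamps its version from git metadata, roughly a git describe against the most recent version tag reachable from the build commit (the exact logic lives in hack/lib/version.sh; the command below is a simplified sketch, not the verbatim script):

$ git describe --tags --dirty --abbrev=14
v1.22.1-4609-g60f5a1c6c03d74-dirty

The build then rewrites the -g<sha> part as +<sha>, giving the v1.22.1-4609+60f5a1c6c03d74-dirty string seen in the report. So if the rebased openshift/kubernetes branch is built before the upstream v1.23.* tags are pushed to it, describe can only fall back to the older v1.22.1 tag plus a commit count, which is why pushing the tags (the workaround above) lets the same commit describe as 1.23.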
Did a quick test with a CI release:

$ oc get clusterversion
NAME      VERSION                         AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.ci-2022-02-23-183447   True        False         2m57s   Cluster version is 4.10.0-0.ci-2022-02-23-183447

$ oc get no
NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-mfpidjt-72292-4xhlm-master-0         Ready    master   27m   v1.23.3+3525ccf
ci-ln-mfpidjt-72292-4xhlm-master-1         Ready    master   27m   v1.23.3+3525ccf
ci-ln-mfpidjt-72292-4xhlm-master-2         Ready    master   27m   v1.23.3+3525ccf
ci-ln-mfpidjt-72292-4xhlm-worker-a-v5np4   Ready    worker   18m   v1.23.3+3525ccf
ci-ln-mfpidjt-72292-4xhlm-worker-b-ltgks   Ready    worker   18m   v1.23.3+3525ccf
ci-ln-mfpidjt-72292-4xhlm-worker-c-hbcz7   Ready    worker   18m   v1.23.3+3525ccf

$ oc version
Client Version: 4.9.0-0.nightly-2021-11-18-000209
Server Version: 4.10.0-0.ci-2022-02-23-183447
Kubernetes Version: v1.23.3-1997+3525ccf9da4b8b-dirty
Since the above Kubernetes version includes 'dirty', it is not as expected.
This was done in: https://github.com/openshift/kubernetes/pull/1196
Did some quick verification steps:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-03-08-002944   True        False         53m     Cluster version is 4.10.0-0.nightly-2022-03-08-002944

$ oc get no
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-129-10.us-east-2.compute.internal    Ready    worker   70m   v1.23.3+e419edf
ip-10-0-143-103.us-east-2.compute.internal   Ready    master   76m   v1.23.3+e419edf
ip-10-0-161-175.us-east-2.compute.internal   Ready    worker   66m   v1.23.3+e419edf
ip-10-0-190-240.us-east-2.compute.internal   Ready    master   76m   v1.23.3+e419edf
ip-10-0-220-199.us-east-2.compute.internal   Ready    master   77m   v1.23.3+e419edf

$ oc version
Client Version: 4.10.1
Server Version: 4.10.0-0.nightly-2022-03-08-002944
Kubernetes Version: v1.23.3+e419edf

As you can see, 4.10 CI returns Kubernetes version v1.23.3+e419edf, which is expected.
Please see the previous comments; it seems we should not use a "nightly" payload to verify this bug.
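A quick way to confirm which payload stream the cluster under test is actually running (so a nightly is not verified by mistake) is to check the desired release recorded in ClusterVersion, for example (output shown is the CI release from the next comment):

$ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
4.10.0-0.ci-2022-03-08-135924

CI payloads carry the 0.ci segment, nightlies carry 0.nightly.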
Sorry for the mistake. I rebuilt the cluster with a 4.10.0-0.ci payload:

$ oc get clusterversion
NAME      VERSION                         AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.ci-2022-03-08-135924   True        False         5m10s   Cluster version is 4.10.0-0.ci-2022-03-08-135924

$ oc get no
NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-ivvddzb-72292-ms7lf-master-0         Ready    master   23m   v1.23.3+e419edf
ci-ln-ivvddzb-72292-ms7lf-master-1         Ready    master   23m   v1.23.3+e419edf
ci-ln-ivvddzb-72292-ms7lf-master-2         Ready    master   23m   v1.23.3+e419edf
ci-ln-ivvddzb-72292-ms7lf-worker-a-5rm6v   Ready    worker   15m   v1.23.3+e419edf
ci-ln-ivvddzb-72292-ms7lf-worker-b-c8sw8   Ready    worker   14m   v1.23.3+e419edf
ci-ln-ivvddzb-72292-ms7lf-worker-c-m5mmt   Ready    worker   15m   v1.23.3+e419edf

$ oc version
Client Version: 4.10.1
Server Version: 4.10.0-0.ci-2022-03-08-135924
Kubernetes Version: v1.23.3-2003+e419edff267ffa-dirty

As you can see, the Kubernetes version still contains 'dirty', which is not expected. Assigning back so Dev can double-check this issue.
(In reply to Zimo Xiao from comment #9)
> $ oc version
> Client Version: 4.10.1
> Server Version: 4.10.0-0.ci-2022-03-08-135924
> Kubernetes Version: v1.23.3-2003+e419edff267ffa-dirty
>
> As you can see, the Kubernetes version still contains 'dirty', which is not
> expected. Assigning back so Dev can double-check this issue.

'dirty' is fine there; this particular issue was about the k8s version reported, which I see correctly reports 1.23.3 here.

Moving back to MODIFIED.
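For completeness on the -dirty suffix: the build also records a git tree state and appends -dirty to the version whenever the checkout has uncommitted changes at build time; conceptually it just checks whether the working tree is clean, e.g. (hypothetical file path shown purely for illustration):

$ git status --porcelain
 M some/modified/file.go

Any output there marks the tree dirty. The suffix says nothing about which Kubernetes level was picked up, which is why v1.23.3 with -dirty is acceptable for this bug.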
According to 2049603#c10, the Kubernetes version is correct, so moving it to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.10.4 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:0811