Description of problem:
The status.conditions of the DNS operator is not updated after upgrade.

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-18-200009

How reproducible:
Always

Steps to Reproduce:
1. Install a cluster with 4.0.0-0.nightly-2019-03-15-043409
2. $ oc get clusteroperator dns -o yaml
3. Upgrade to 4.0.0-0.nightly-2019-03-18-200009
4. $ oc get clusteroperator dns -o yaml

Actual results:
status.conditions is not updated and still shows the old timestamps.

---> step 2 (before upgrade):

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-03-18T09:38:08Z
  generation: 1
  name: dns
  resourceVersion: "2945"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/dns
  uid: 89ac4631-4961-11e9-99b1-06983a957c1e
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-03-18T09:38:09Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-03-18T09:38:09Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-03-18T09:38:34Z
    status: "True"
    type: Available
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-dns-operator
    resource: namespaces
  - group: ""
    name: openshift-dns
    resource: namespaces
  versions:
  - name: operator
    version: 4.0.0-0.nightly-2019-03-15-043409
  - name: coredns
    version: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9e3107ae30e589be00c2c28fab543e0c05a1dff5188de479cf37693834eabe1

---> step 4 (after upgrade):

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-03-18T09:38:08Z
  generation: 1
  name: dns
  resourceVersion: "731397"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/dns
  uid: 89ac4631-4961-11e9-99b1-06983a957c1e
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-03-18T09:38:09Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-03-18T09:38:09Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-03-18T09:38:34Z
    status: "True"
    type: Available
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-dns-operator
    resource: namespaces
  - group: ""
    name: openshift-dns
    resource: namespaces
  versions:
  - name: operator
    version: 4.0.0-0.nightly-2019-03-18-200009
  - name: coredns
    version: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9e3107ae30e589be00c2c28fab543e0c05a1dff5188de479cf37693834eabe1

Expected results:
status.conditions should be updated after the upgrade.

Additional info:
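The failing check above can be expressed as a minimal script: compare the Progressing condition's lastTransitionTime before and after the upgrade. This is only an illustrative sketch, not operator code; the condition_time helper is hypothetical, and the sample data is copied from the dumps above.

```python
# Sketch: compare the Progressing condition's lastTransitionTime across an
# upgrade. The condition_time helper is hypothetical; the data below is
# copied from the ClusterOperator dumps in this report.

def condition_time(status, cond_type):
    """Return the lastTransitionTime of the condition with the given type."""
    for cond in status["conditions"]:
        if cond["type"] == cond_type:
            return cond["lastTransitionTime"]
    raise KeyError(cond_type)

before = {"conditions": [
    {"type": "Failing", "status": "False", "lastTransitionTime": "2019-03-18T09:38:09Z"},
    {"type": "Progressing", "status": "False", "lastTransitionTime": "2019-03-18T09:38:09Z"},
    {"type": "Available", "status": "True", "lastTransitionTime": "2019-03-18T09:38:34Z"},
]}
after = before  # in this bug, the dump after the upgrade is identical

# The bug: the operator version changed, but no condition timestamp moved.
print(condition_time(before, "Progressing") == condition_time(after, "Progressing"))  # → True
```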
This bug appears to share the same fundamental problem as [1]. I believe [2] will fix [1] and the operator upgrade use case will be tested after the PR is merged. If [2] does fix [1], the approach in [2] can be duplicated to fix this bug. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1690333 [2] https://github.com/openshift/cluster-ingress-operator/pull/175
*** Bug 1695204 has been marked as a duplicate of this bug. ***
https://github.com/openshift/cluster-version-operator/pull/154 will document this requirement, and an e2e test will verify it post-upgrade in the future.
Increasing severity to match other bugs. Operators MUST report accurate status post-upgrade, or they will cause customer confusion.
PR https://github.com/openshift/cluster-dns-operator/pull/88 has been submitted to fix this bug.
The PR is still open; moving the bug to MODIFIED status.
Ran the test with 2 clusters and had the same results: status.conditions looked the same before and after the upgrade.

$ oc get clusterversions.config.openshift.io
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-05-03-045206   True        False         3m1s    Cluster version is 4.1.0-0.nightly-2019-05-03-045206

$ oc get clusteroperator dns -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-05-03T15:22:55Z"
  generation: 1
  name: dns
  resourceVersion: "3036"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/dns
  uid: 535a16c4-6db7-11e9-b6f4-02be967dc830
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-05-03T15:23:10Z"
    message: All desired DNS DaemonSets available and operand Namespace exists
    reason: AsExpected
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-05-03T15:23:10Z"
    message: Desired and available number of DNS DaemonSets are equal
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-05-03T15:23:10Z"
    message: At least 1 DNS DaemonSet available
    reason: AsExpected
    status: "True"
    type: Available
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-dns-operator
    resource: namespaces
  - group: ""
    name: openshift-dns
    resource: namespaces
  versions:
  - name: operator
    version: 4.1.0-0.nightly-2019-05-03-045206
  - name: coredns
    version: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f2209f60c556962fa4bcf681ea3996bb47cb2f7ae18a4129d20827b71238b13
  - name: openshift-cli
    version: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f263410c83d80fefedbad910042c4648d7e81b339011709e8c238f456e9e6e2

$ oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0-201905030232+533d5bc-dirty", GitCommit:"533d5bc", GitTreeState:"dirty", BuildDate:"2019-05-03T07:08:56Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+b9f554d", GitCommit:"b9f554d", GitTreeState:"clean", BuildDate:"2019-05-03T03:32:34Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

$ oc adm upgrade --to-image=registry.svc.ci.openshift.org/ocp/release:4.1.0-0.nightly-2019-05-03-121122 --force

$ oc get clusterversions.config.openshift.io
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-05-03-121122   True        False         112m    Cluster version is 4.1.0-0.nightly-2019-05-03-121122

$ oc get clusteroperator dns -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-05-03T15:22:55Z"
  generation: 1
  name: dns
  resourceVersion: "23442"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/dns
  uid: 535a16c4-6db7-11e9-b6f4-02be967dc830
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-05-03T15:23:10Z"
    message: All desired DNS DaemonSets available and operand Namespace exists
    reason: AsExpected
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-05-03T15:23:10Z"
    message: Desired and available number of DNS DaemonSets are equal
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-05-03T15:23:10Z"
    message: At least 1 DNS DaemonSet available
    reason: AsExpected
    status: "True"
    type: Available
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-dns-operator
    resource: namespaces
  - group: ""
    name: openshift-dns
    resource: namespaces
  versions:
  - name: operator
    version: 4.1.0-0.nightly-2019-05-03-121122
  - name: coredns
    version: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f2209f60c556962fa4bcf681ea3996bb47cb2f7ae18a4129d20827b71238b13
  - name: openshift-cli
    version: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e8679cfc635d7037e933ef025725bf4c000b7d89b0486d1cef063fc2007c122
Looks like it's not just the DNS clusteroperator; many other operators are not updating after a successful upgrade either.

$ oc get clusteroperators
NAME                                 VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
cloud-credential                     4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
cluster-autoscaler                   4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
console                              4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
dns                                  4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
image-registry                       4.1.0-0.nightly-2019-05-03-152614   True        False         False      8h
ingress                              4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
kube-apiserver                       4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
kube-controller-manager              4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
kube-scheduler                       4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
machine-api                          4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
machine-config                       4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
marketplace                          4.1.0-0.nightly-2019-05-03-152614   True        False         False      49m
monitoring                           4.1.0-0.nightly-2019-05-03-152614   True        False         False      46m
network                              4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
node-tuning                          4.1.0-0.nightly-2019-05-03-152614   True        False         False      62m
openshift-apiserver                  4.1.0-0.nightly-2019-05-03-152614   True        False         False      8h
openshift-controller-manager         4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
openshift-samples                    4.1.0-0.nightly-2019-05-03-152614   True        False         False      62m
operator-lifecycle-manager           4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
service-ca                           4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
service-catalog-apiserver            4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
service-catalog-controller-manager   4.1.0-0.nightly-2019-05-03-152614   True        False         False      13h
storage                              4.1.0-0.nightly-2019-05-03-152614   True        False         False      62m
https://bugzilla.redhat.com/show_bug.cgi?id=1690342#c12 Please note that "Since" isn't the criterion here; you need to check specifically whether the Progressing last transition time was updated for this particular bug. The "since" value may not align with the Progressing transition time.
PR: https://github.com/openshift/cluster-dns-operator/pull/104
Follow-up PR: https://github.com/openshift/cluster-dns-operator/pull/106
(In reply to Dan Mace from comment #13)
> https://bugzilla.redhat.com/show_bug.cgi?id=1690342#c12 Please note that
> "Since" isn't the criterion here; you need to check specifically whether the
> Progressing last transition time was updated for this particular bug. The
> "since" value may not align with the Progressing transition time.

Hi Dan, do you mean that only the lastTransitionTime of `Progressing` should be updated during an upgrade, and that the lastTransitionTime of `Available` and `Degraded` will stay the same? Does `SINCE` align with the `Available` transition time? I created a new ingresscontroller and saw all three items, `Progressing`, `Available`, and `SINCE`, updated.
Yes, the "SINCE" column comes from the Available condition:
https://github.com/openshift/cluster-version-operator/blob/6750bf41fa6b291d29c373e2dae6db3892e08c32/install/0000_00_cluster-version-operator_01_clusteroperator.crd.yaml#L23-L26

We need to make sure that the last transition time of the Progressing condition is updated, whether or not other conditions are updated. To get the Progressing condition's last transition time, you can use json or yaml output, or you can use custom columns as in the following command:

oc get clusteroperators -o 'custom-columns=NAME:.metadata.name,VERSION:.status.versions[?(@.name=="operator")].version,PROGRESSING LAST TRANSITION:.status.conditions[?(@.type=="Progressing")].lastTransitionTime'
Thank you for your reply, Miciah. I upgraded a cluster from 4.1.0-0.nightly-2019-05-08-012425 to 4.1.0-0.nightly-2019-05-08-065958 and the issue has been fixed:

status:
  conditions:
  - lastTransitionTime: "2019-05-09T02:31:54Z"
    message: All desired DNS DaemonSets available and operand Namespace exists
    reason: AsExpected
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-05-09T06:43:36Z"
    message: Desired and available number of DNS DaemonSets are equal
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-05-09T02:31:54Z"
    message: At least 1 DNS DaemonSet available
    reason: AsExpected
    status: "True"
    type: Available
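The pass criterion can be sketched as a small check: after an upgrade, the Progressing condition's lastTransitionTime must move forward, while Available and Degraded may keep their original timestamps. This is an illustrative sketch only; the parse helper is hypothetical, and the pre-upgrade time is assumed to equal the unchanged Available/Degraded timestamp from the dump above.

```python
# Sketch of the verification criterion: Progressing must have transitioned
# after the upgrade, while Available and Degraded may keep their install-time
# timestamps. Timestamps are taken from the post-upgrade dump above; the
# pre-upgrade reference time is assumed to equal the Available timestamp.

from datetime import datetime

def parse(ts):
    """Parse a Kubernetes-style RFC 3339 timestamp (UTC, 'Z' suffix)."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

install_time = parse("2019-05-09T02:31:54Z")  # before the upgrade
after = {
    "Degraded": parse("2019-05-09T02:31:54Z"),
    "Progressing": parse("2019-05-09T06:43:36Z"),
    "Available": parse("2019-05-09T02:31:54Z"),
}

# The fix works: Progressing moved, the other conditions did not.
progressing_updated = after["Progressing"] > install_time
print(progressing_updated)  # → True
```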
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758