We need our platform components to set the system-cluster-critical priorityClassName on their workloads. More detail here: https://github.com/openshift/origin/pull/22217

For OLM this means the deployments that generate these pods:

openshift-operator-lifecycle-manager/catalog-operator-7d576d6674-q7n4m
openshift-operator-lifecycle-manager/olm-operator-5b648b4b75-cxltx
openshift-operator-lifecycle-manager/olm-operators-pwcrx
openshift-operator-lifecycle-manager/packageserver-59bcd597f5-9jv4b
openshift-operator-lifecycle-manager/packageserver-59bcd597f5-zttsd
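In practice the change amounts to adding the field to each deployment's pod template. A minimal sketch of the intended placement, using the olm-operator deployment as an example (abbreviated, not a verbatim copy of the OLM manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: olm-operator
  namespace: openshift-operator-lifecycle-manager
spec:
  template:
    spec:
      # priorityClassName is a PodSpec field; system-cluster-critical marks these
      # pods as essential to cluster operation for scheduling and eviction purposes
      priorityClassName: system-cluster-critical
      containers:
      - name: olm-operator
        # remaining container fields unchanged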
PR here: https://github.com/operator-framework/operator-lifecycle-manager/pull/775
Verification failed, details below:

OLM version: io.openshift.build.commit.id=b853cc5c5754360f5e8f7404f6c3e1526986eb63

[jzhang@dhcp-140-46 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-04-10-182914   True        False         21h     Cluster version is 4.0.0-0.nightly-2019-04-10-182914

[jzhang@dhcp-140-46 ~]$ oc get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
catalog-operator   1/1     1            1           21h
olm-operator       1/1     1            1           21h
packageserver      2/2     2            2           21h

I didn't find the "system-cluster-critical" setting; the grep below returns nothing:

[jzhang@dhcp-140-46 ~]$ oc get deployment -o yaml |grep -i "priorityClassName"
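(A more targeted check than grepping the full deployment YAML is a jsonpath query along these lines, which should print the field for a single deployment, or nothing at all if it is absent; the -n flag is inferred from the pod names above:)

# expected once fixed: system-cluster-critical (empty output means the field is missing)
oc get deployment olm-operator -n openshift-operator-lifecycle-manager \
  -o jsonpath='{.spec.template.spec.priorityClassName}'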
https://github.com/operator-framework/operator-lifecycle-manager/blob/master/manifests/0000_50_olm_07-olm-operator.deployment.yaml#L34

It's definitely there; it may just not have made it into the build you're using yet.
Evan, I don't think so. As shown below, `priorityClassName: "system-cluster-critical"` is in the payload:

[jzhang@dhcp-140-46 aws-cluster]$ oc adm release extract --from=registry.svc.ci.openshift.org/ocp/release:4.0.0-0.nightly-2019-04-10-182914 --to=182914-payload
[jzhang@dhcp-140-46 182914-payload]$ cat 0000_50_olm_07-olm-operator.deployment.yaml |grep priorityClassName
priorityClassName: "system-cluster-critical"

But the setting never shows up on the deployments running in the cluster, which is strange; see below:

[jzhang@dhcp-140-46 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-04-10-182914   True        False         20h     Cluster version is 4.0.0-0.nightly-2019-04-10-182914

[jzhang@dhcp-140-46 ~]$ oc get deployment olm-operator -o yaml|grep -i "priorityClassName"

[jzhang@dhcp-140-46 ~]$ oc get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
catalog-operator   1/1     1            1           20h
olm-operator       1/1     1            1           20h
packageserver      2/2     2            2           20h

[jzhang@dhcp-140-46 ~]$ oc get deployment -o yaml |grep -i "priorityClassName"
Aha, the `priorityClassName` property was at the wrong level of the manifest; it is a PodSpec field, so it only takes effect under the pod template's spec. I submitted a PR for this: https://github.com/operator-framework/operator-lifecycle-manager/pull/817
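Once that fix lands, the value should also be visible on the pods themselves, since the pod template propagates the field and the priority admission plugin resolves it to a numeric spec.priority. A quick illustrative spot-check (not taken from the PR):

# list OLM pods with their priority class and the resolved numeric priority
oc get pods -n openshift-operator-lifecycle-manager \
  -o custom-columns=NAME:.metadata.name,CLASS:.spec.priorityClassName,PRIORITY:.spec.priority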
LGTM, verified. Details below:

Cluster version is 4.1.0-0.nightly-2019-04-28-064010
OLM version: io.openshift.build.commit.id=49ca4c57934ed4b0c974ce9bc3af354d5fc7146b

mac:~ jianzhang$ oc get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
catalog-operator   1/1     1            1           23h
olm-operator       1/1     1            1           23h
packageserver      2/2     2            2           23h

mac:~ jianzhang$ oc get deployment -o yaml |grep -i "priorityClassName"
priorityClassName: system-cluster-critical
priorityClassName: system-cluster-critical
priorityClassName: system-cluster-critical
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758