Description of problem:

For an IngressController object configured to use the "LoadBalancerService" or "Private" endpoint publishing strategy type, the ingress operator configures the ingress-controller deployment with a deployment strategy and an affinity scheduling rule to prevent downtime during rolling updates, and an anti-affinity scheduling rule to promote spread across nodes.

openshift/cluster-ingress-operator#343 (https://github.com/openshift/cluster-ingress-operator/pull/343) added support for the "NodePortService" endpoint publishing strategy type, but the implementation fails to configure a deployment strategy or affinity policy on the ingress-controller deployment for an IngressController that uses this endpoint publishing strategy type. The operator should configure the same deployment strategy and affinity policy for all three endpoint publishing strategy types ("LoadBalancerService", "NodePortService", and "Private").

Version-Release number of selected component (if applicable):

The "NodePortService" endpoint publishing strategy type is introduced in 4.4.

Steps to Reproduce:

1. Create a new IngressController with the "NodePortService" endpoint publishing strategy type and another with the "Private" type:

oc create -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: nodeport
  namespace: openshift-ingress-operator
spec:
  replicas: 1
  domain: npexample.com
  endpointPublishingStrategy:
    type: NodePortService
EOF

oc create -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: private
  namespace: openshift-ingress-operator
spec:
  replicas: 1
  domain: privateexample.com
  endpointPublishingStrategy:
    type: Private
EOF

2. Check the Deployments that the operator creates:

oc -n openshift-ingress get deployments/router-private -o $'jsonpath=.spec.strategy={.spec.strategy}\n.spec.template.spec.affinity={.spec.template.spec.affinity}\n'
oc -n openshift-ingress get deployments/router-nodeport -o $'jsonpath=.spec.strategy={.spec.strategy}\n.spec.template.spec.affinity={.spec.template.spec.affinity}\n'

Actual results:

The Deployments have different deployment strategies and affinity policies:

% oc -n openshift-ingress get deployments/router-private -o $'jsonpath=.spec.strategy={.spec.strategy}\n.spec.template.spec.affinity={.spec.template.spec.affinity}\n'
.spec.strategy=map[rollingUpdate:map[maxSurge:1 maxUnavailable:0] type:RollingUpdate]
.spec.template.spec.affinity=map[podAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[podAffinityTerm:map[labelSelector:map[matchExpressions:[map[key:ingresscontroller.operator.openshift.io/deployment-ingresscontroller operator:In values:[private]] map[key:ingresscontroller.operator.openshift.io/hash operator:NotIn values:[6b64598cc9]]]] topologyKey:kubernetes.io/hostname] weight:100]]] podAntiAffinity:map[requiredDuringSchedulingIgnoredDuringExecution:[map[labelSelector:map[matchExpressions:[map[key:ingresscontroller.operator.openshift.io/deployment-ingresscontroller operator:In values:[private]] map[key:ingresscontroller.operator.openshift.io/hash operator:In values:[6b64598cc9]]]] topologyKey:kubernetes.io/hostname]]]]
% oc -n openshift-ingress get deployments/router-nodeport -o $'jsonpath=.spec.strategy={.spec.strategy}\n.spec.template.spec.affinity={.spec.template.spec.affinity}\n'
.spec.strategy=map[rollingUpdate:map[maxSurge:25% maxUnavailable:25%] type:RollingUpdate]
.spec.template.spec.affinity=
%

Expected results:

The Deployments should have the same deployment strategy and affinity policy.
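For background on what the missing affinity stanza does: the operator labels router pods with the ingresscontroller name and a pod-template hash, so during a rollout a new-generation pod prefers to land on a node running an old-generation pod (the replacement comes up where the pod it replaces is), while required anti-affinity keeps same-generation pods on distinct nodes. A minimal sketch of that selector logic, not the operator's code; the hash 6b64598cc9 is taken from the output above, and the old-generation hash 5f6d7c8 is made up for illustration:

```python
def matches(match_expressions, labels):
    """Evaluate a Kubernetes matchExpressions list against a pod's labels
    (only the In/NotIn operators that the ingress operator uses)."""
    for expr in match_expressions:
        value = labels.get(expr["key"])
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
        if expr["operator"] == "NotIn" and value in expr["values"]:
            return False
    return True

LABEL_IC = "ingresscontroller.operator.openshift.io/deployment-ingresscontroller"
LABEL_HASH = "ingresscontroller.operator.openshift.io/hash"

old_pod = {LABEL_IC: "private", LABEL_HASH: "5f6d7c8"}     # hypothetical old-generation hash
new_pod = {LABEL_IC: "private", LABEL_HASH: "6b64598cc9"}  # hash from the output above

# Preferred podAffinity on the new generation: same ingresscontroller,
# hash NotIn [6b64598cc9] -- "co-locate me with an old-generation pod".
affinity = [
    {"key": LABEL_IC, "operator": "In", "values": ["private"]},
    {"key": LABEL_HASH, "operator": "NotIn", "values": ["6b64598cc9"]},
]
# Required podAntiAffinity: same ingresscontroller, hash In [6b64598cc9]
# -- "never put two new-generation pods on the same node".
anti_affinity = [
    {"key": LABEL_IC, "operator": "In", "values": ["private"]},
    {"key": LABEL_HASH, "operator": "In", "values": ["6b64598cc9"]},
]

print(matches(affinity, old_pod), matches(affinity, new_pod))            # True False
print(matches(anti_affinity, new_pod), matches(anti_affinity, old_pod))  # True False
```

Without this stanza, the router-nodeport rollout gets no co-location preference and no spread guarantee, which is the gap this bug tracks.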
It seems the maxSurge/maxUnavailable values are also different; I wonder whether that needs to be fixed as well?

.spec.strategy=map[rollingUpdate:map[maxSurge:1 maxUnavailable:0] type:RollingUpdate] (router-private)
.spec.strategy=map[rollingUpdate:map[maxSurge:25% maxUnavailable:25%] type:RollingUpdate] (router-nodeport)
> It seems the maxSurge/maxUnavailable values are also different; I wonder whether that needs to be fixed as well?

Yes, the maxSurge and maxUnavailable parameters should be set as follows for both Deployments:

.spec.strategy=map[rollingUpdate:map[maxSurge:1 maxUnavailable:0] type:RollingUpdate]
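A rough sketch of why the defaults matter (illustrative arithmetic, not the operator's code): Kubernetes resolves a percentage maxSurge by rounding up and a percentage maxUnavailable by rounding down, so with the default 25%/25% a rollout may take an old router pod down before its replacement is ready once the deployment is scaled up, while the explicit maxSurge: 1 / maxUnavailable: 0 guarantees a replacement is ready first at every scale:

```python
import math

def resolve_rolling_update(replicas, max_surge, max_unavailable):
    """Resolve maxSurge/maxUnavailable the way Kubernetes does:
    a percentage maxSurge rounds up, a percentage maxUnavailable rounds down."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100.0
            rounder = math.ceil if round_up else math.floor
            return rounder(replicas * fraction)
        return value
    return resolve(max_surge, True), resolve(max_unavailable, False)

# Default strategy on router-nodeport: with 4 replicas, one router pod
# may be deleted before its replacement is ready.
print(resolve_rolling_update(4, "25%", "25%"))  # (1, 1)

# Explicit strategy used for the other publishing types: zero pods
# unavailable during the rollout, regardless of replica count.
print(resolve_rolling_update(4, 1, 0))          # (1, 0)
```

Note that at replicas: 1 the percentage defaults happen to resolve to (1, 0) as well, which is why the problem only shows up once the IngressController is scaled out.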
Verified with 4.4.0-0.nightly-2020-02-12-235629; the issue has been fixed.

$ oc -n openshift-ingress get deployments/router-nodeport -o $'jsonpath=.spec.strategy={.spec.strategy}\n.spec.template.spec.affinity={.spec.template.spec.affinity}\n'
.spec.strategy=map[rollingUpdate:map[maxSurge:1 maxUnavailable:0] type:RollingUpdate]
.spec.template.spec.affinity=map[podAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[podAffinityTerm:map[labelSelector:map[matchExpressions:[map[key:ingresscontroller.operator.openshift.io/deployment-ingresscontroller operator:In values:[nodeport]] map[key:ingresscontroller.operator.openshift.io/hash operator:NotIn values:[5485665fd]]]] topologyKey:kubernetes.io/hostname] weight:100]]] podAntiAffinity:map[requiredDuringSchedulingIgnoredDuringExecution:[map[labelSelector:map[matchExpressions:[map[key:ingresscontroller.operator.openshift.io/deployment-ingresscontroller operator:In values:[nodeport]] map[key:ingresscontroller.operator.openshift.io/hash operator:In values:[5485665fd]]]] topologyKey:kubernetes.io/hostname]]]]

$ oc -n openshift-ingress get deployment router-nodeport -o yaml
<---snip--->
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: ingresscontroller.operator.openshift.io/deployment-ingresscontroller
                  operator: In
                  values:
                  - nodeport
                - key: ingresscontroller.operator.openshift.io/hash
                  operator: NotIn
                  values:
                  - 5485665fd
              topologyKey: kubernetes.io/hostname
            weight: 100
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: ingresscontroller.operator.openshift.io/deployment-ingresscontroller
                operator: In
                values:
                - nodeport
              - key: ingresscontroller.operator.openshift.io/hash
                operator: In
                values:
                - 5485665fd
            topologyKey: kubernetes.io/hostname
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581