Description of problem:
The router was running fine. Then we added environment variables to dc/router from the web console:
ROUTER_CIPHERS=modern
TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
and added a configmap, customrouter, to replace the template. The deployment then failed with:
router-7: About to stop retrying router-7: couldn't create deployer pod for default/router-7: unable to parse requirement: found '', expected: '='
We then tried deleting the dc, svc, sa, secret, configmap, and pod of the router, but it still gives the same error. I saw https://github.com/openshift/origin/issues/18127, which looks like the same issue, since oadm diagnostics also shows the same error.

Version-Release number of selected component (if applicable):
OCP 3.5

How reproducible:
Always on the customer environment

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
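For reference, the console changes described above correspond roughly to the following CLI steps. This is a sketch, not what the customer actually ran (they used the web console), and the template file name haproxy-config.template is an assumption:

```shell
# Assumption: the custom template is in a local file named haproxy-config.template.
# Create the configmap holding the replacement template.
oc create configmap customrouter --from-file=haproxy-config.template

# Mount the configmap into the router pod at the path the env var points to.
oc set volume dc/router --add --overwrite \
    --name=config-volume \
    --mount-path=/var/lib/haproxy/conf/custom \
    --source='{"configMap": {"name": "customrouter"}}'

# Add the two environment variables from the description.
oc set env dc/router \
    ROUTER_CIPHERS=modern \
    TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
```

These commands require a cluster and the router's namespace, so they are shown here only as a reproduction outline.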
The BZ suggests this might be an admission plugin issue, where:
1) The customer had a working configuration.
2) The customer updated dc/router (adding those 2 env vars).
3) On update, admission kicked in and mutated the DC in a wrong way? (I don't see any 'bad' mutation inside the DC, other than the nodeSelector being added to the DC, and missing in the RC?)
4) After the customer deleted the DC, RC, etc. and recreated them, admission did the same thing as on the update, breaking the deployment...
The namespace annotation was openshift.io/node-selector: role-infra. We changed it to openshift.io/node-selector: role=infra (i.e. with the '=' sign), which solved the issue.
It turned out to be a configuration issue. The customer edited the namespace and changed openshift.io/node-selector:
== from: openshift.io/node-selector: role-infra ---------- original was "role-infra"
== to: openshift.io/node-selector: role=infra ---------- changed the "-" to "=", so it becomes "role=infra"
That solved the issue. Thanks Miheer, Ryan and Michal.
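This matches how Kubernetes parses label-selector requirements: each requirement needs an operator such as '=', so a bare token like "role-infra" is read as a key with nothing after it, which is exactly the "found '', expected: '='" error. A minimal shell sketch (not the actual Kubernetes parser) of the distinction:

```shell
#!/bin/sh
# Minimal sketch (NOT the real Kubernetes label-selector parser):
# a node-selector entry must be key=value; a bare token has no '='.
check_selector() {
  case "$1" in
    *=*) echo "ok: $1" ;;
    *)   echo "unable to parse requirement: '$1' has no '='" ;;
  esac
}

check_selector "role-infra"   # the broken annotation value -> parse error
check_selector "role=infra"   # the corrected value -> ok
```

The fix could also be applied from the CLI, e.g. `oc annotate namespace default openshift.io/node-selector='role=infra' --overwrite` (assuming the router runs in the default namespace).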
Closing then.