Bug 1576295

Summary: Deploying router leads to error router-7: About to stop retrying router-7: couldn't create deployer pod for default/router-7: unable to parse requirement: found '', expected: '='
Product: OpenShift Container Platform
Component: openshift-controller-manager
Version: 3.5.1
Target Release: 3.5.z
Hardware: Unspecified
OS: Unspecified
Status: CLOSED NOTABUG
Severity: urgent
Priority: urgent
Type: Bug
Reporter: Miheer Salunke <misalunk>
Assignee: Michal Fojtik <mfojtik>
QA Contact: Wang Haoran <haowang>
CC: aos-bugs, jliggitt, misalunk, mluther, tnozicka
Last Closed: 2018-05-10 11:46:11 UTC

Description Miheer Salunke 2018-05-09 07:39:42 UTC
Description of problem:
The router was running fine earlier.

Then we added environment variables to dc/router from the web console:
ROUTER_CIPHERS = modern
TEMPLATE_FILE = /var/lib/haproxy/conf/custom/haproxy-config.template
and added the configmap customrouter to replace the template.

It failed with 
router-7: About to stop retrying router-7: couldn't create deployer pod for default/router-7: unable to parse requirement: found '', expected: '='

We then tried deleting the router's dc, svc, sa, secret, configmap, and pod, but it still gave the same error.

I saw https://github.com/openshift/origin/issues/18127, which looks like the same issue; from oadm diagnostics we also see the same error.
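
For context, the node selector behind this error has to be a comma-separated list of key=value pairs; a value with no '=' in it cannot be split into a requirement at all. A minimal, illustrative Go sketch of such a strict parser (not the actual origin code, but it reproduces the same failure shape):

package main

import (
    "fmt"
    "strings"
)

// parseNodeSelector mimics the strict "key=value[,key=value]" parsing
// that node-selector strings go through: '=' is the only accepted
// operator, so a requirement like "role-infra" cannot be parsed.
// Illustrative only, not the actual origin implementation.
func parseNodeSelector(selector string) (map[string]string, error) {
    out := map[string]string{}
    for _, req := range strings.Split(selector, ",") {
        parts := strings.SplitN(req, "=", 2)
        if len(parts) != 2 {
            // The parser reached the end of the requirement without
            // finding an '=', matching the message seen in this bug.
            return nil, fmt.Errorf("unable to parse requirement: found '', expected: '='")
        }
        out[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
    return out, nil
}

func main() {
    _, err := parseNodeSelector("role-infra") // the malformed value
    fmt.Println(err) // unable to parse requirement: found '', expected: '='

    m, _ := parseNodeSelector("role=infra") // a well-formed value
    fmt.Println(m) // map[role:infra]
}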

Version-Release number of selected component (if applicable):
OCP 3.5

How reproducible:
Always, in the customer's environment.

Steps to Reproduce:
1. Add the environment variables ROUTER_CIPHERS and TEMPLATE_FILE to dc/router from the web console.
2. Add the configmap customrouter to replace the template.
3. Wait for the config change to trigger a new deployment.

Actual results:
router-7: About to stop retrying router-7: couldn't create deployer pod for default/router-7: unable to parse requirement: found '', expected: '='

Expected results:
The router deploys successfully.

Additional info:

Comment 7 Michal Fojtik 2018-05-09 09:11:50 UTC
The BZ suggests this might be an admission plugin issue, where:

1) The customer had a working configuration.
2) The customer updated dc/router (adding those 2 env vars).
3) On update, admission kicked in and mutated the DC in a wrong way? (I don't see any 'bad' mutation inside the DC, other than a 'nodeSelector' being added to the DC (and missing from the RC?).)
4) After the customer deleted the DC, RC, etc. and recreated them, admission did the same thing as on update, breaking the deployment...
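
The node selector that admission applies to pods in a project comes from the namespace's openshift.io/node-selector annotation, so that annotation is worth inspecting here. A minimal client-go sketch for that (the kubeconfig path and the modern client-go API are assumptions; on 3.5 the quick equivalent is oc get namespace default -o yaml):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes a local kubeconfig; adjust the path for your environment.
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    ns, err := client.CoreV1().Namespaces().Get(context.TODO(), "default", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // This is the string the deployer-pod path has to parse as key=value pairs.
    fmt.Println(ns.Annotations["openshift.io/node-selector"])
}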

Comment 10 Miheer Salunke 2018-05-10 04:48:53 UTC
The namespace annotation was:
openshift.io/node-selector: role-infra

We set it to:
openshift.io/node-selector: role=infra

i.e. with the '=' sign, which solved the issue.

Comment 11 Marthen Luther 2018-05-10 05:07:34 UTC
It turned out to be a configuration issue.

The customer edited the namespace and changed openshift.io/node-selector

== from
    openshift.io/node-selector: role-infra   ---------- the original was "role-infra"

== to
    openshift.io/node-selector: role=infra   ---------- changing the "-" to "=", so it becomes "role=infra"


Looks like this solved the issue.
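
For anyone hitting this later, a corrected value can be sanity-checked with the upstream apimachinery helper, which enforces the same key=value shape (a sketch; the parser OpenShift uses for this annotation differs, but it accepts the same inputs):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/labels"
)

func main() {
    // ConvertSelectorToLabelsMap only accepts "key=value[,key=value]".
    for _, sel := range []string{"role-infra", "role=infra"} {
        set, err := labels.ConvertSelectorToLabelsMap(sel)
        if err != nil {
            fmt.Printf("%q rejected: %v\n", sel, err) // "role-infra" fails here
            continue
        }
        fmt.Printf("%q ok: %v\n", sel, set) // "role=infra" parses to role=infra
    }
}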

Thanks Miheer, Ryan and Michal.

Comment 12 Michal Fojtik 2018-05-10 11:46:11 UTC
Closing then.