Description of problem:
After installing an OCP 3.5 environment, the router pod fails to start.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
[root@gpei-test-35-master-1 ~]# oc get pod
NAME READY STATUS RESTARTS AGE
router-1-7s67u 0/1 CrashLoopBackOff 6 7m
router-1-deploy 1/1 Running 0 8m
[root@gpei-test-35-master-1 ~]# oc logs router-1-7s67u
I0112 02:51:28.670582 1 router.go:229] Router is including routes in all namespaces
E0112 02:51:28.692136 1 ratelimiter.go:52] error executing template for file /var/lib/haproxy/conf/haproxy.config: template: haproxy-config.template:97:6: executing "/var/lib/haproxy/conf/haproxy.config" at <.BindPorts>: can't evaluate field BindPorts in type templaterouter.templateData
E0112 02:51:33.711318 1 ratelimiter.go:52] error executing template for file /var/lib/haproxy/conf/haproxy.config: template: haproxy-config.template:97:6: executing "/var/lib/haproxy/conf/haproxy.config" at <.BindPorts>: can't evaluate field BindPorts in type templaterouter.templateData
Assigning to you, Troy, as I suspect this is a problem with the content of the Docker images not matching the build, but I'm not certain.
Raising severity to urgent since this blocks all testing requiring application traffic into the cluster.
Lowering the priority. As per sdodson's comment, this is likely due to the binary and the template in the image being out of sync. A 3.5 binary will supply the BindPorts variable to the template. Rebuilding the router image to ensure it contains a current 3.5 binary should fix the problem.
The 126.96.36.199 image builds were pointing to the wrong (3.4) repository, so half of their contents came from 3.4 and half from 3.5.
Today's build (v188.8.131.52) corrects this and should work properly.
I'll update this bug when the images have finished building and are pushed for testing.
v184.108.40.206 images have been built and pushed to the test areas.
Please let me know if this has been fixed.
Tested with openshift3/ose-haproxy-router:v220.127.116.11.
The router pod was running well and no errors were shown in the log. Thanks for the fix!
Since this was never released to customers, I am closing the bug.