+++ This bug was initially created as a clone of Bug #1457330 +++

Description of problem:
For egress connectivity of customers, we create many custom routers with the 'oc adm router' command. Each of those routers uses specific ports for http, https, stats, etc. The liveness probe port of a router is typically set to the stats port. When upgrading, the routers ended up in CrashLoopBackOff after the image version was patched by the upgrade scripts. This is because the liveness probe port was the same as the statistics port before the upgrade, but after the upgrade the liveness probe port was set to 1936 for all routers.

Version-Release number of selected component (if applicable):
3.4.1.18

How reproducible:
Always

Steps to Reproduce:
1. Create a custom router with a specific stats port
2. Upgrade the environment

Actual results:
The routers end up in CrashLoopBackOff after the image version is patched by the upgrade scripts. We normally recovered by deleting and recreating the routers.

Expected results:
Routers keep using the specified ports.

Additional info:
During the last upgrade of a prod environment, we investigated what causes the issue and collected the deployment configs before and after the upgrade. As you can see in the attachments, the liveness probe port was the same as the statistics port before the upgrade; after the upgrade, the liveness probe port is set to 1936 for all routers.
In the ansible log we can see when this happens:

-> "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"image\":\"openshift3/ose-haproxy-router:v3.4.1.18\",\"livenessProbe\":{\"tcpSocket\":null,\"httpGet\":{\"path\": \"/healthz\", \"port\": 1936, \"host\": \"localhost\", \"scheme\": \"HTTP\"},\"initialDelaySeconds\":10,\"timeoutSeconds\":1}}]}}}}", "--api-version=v1"

--- Additional comment from Javier Ramirez on 2017-05-31 10:16 EDT ---

--- Additional comment from Javier Ramirez on 2017-05-31 10:17 EDT ---

--- Additional comment from Javier Ramirez on 2017-05-31 10:19 EDT ---

--- Additional comment from Russell Teague on 2017-06-16 16:13:45 EDT ---

Scott,

This bug was fixed[0] in 3.6 through the work of migrating to the oc_* modules. It could potentially be backported to 3.5 since we have the modules there, but I've not investigated the extent of that effort. Since we don't have the modules in 3.4, backporting to 3.4 would be a significant effort. What is your recommendation on moving forward with this issue?

[0] https://github.com/openshift/openshift-ansible/pull/3897/files

--- Additional comment from Scott Dodson on 2017-06-18 20:39:45 EDT ---

Let's go ahead and backport to 3.5.

Javier, it sounded like your customer was able to work around the problem. If we ensure that this is fixed in the playbooks to upgrade to 3.5, is that acceptable?

--- Additional comment from Javier Ramirez on 2017-06-19 07:09:23 EDT ---

(In reply to Scott Dodson from comment #6)
> Let's go ahead and backport to 3.5.
>
> Javier, it sounded like your customer was able to work around the problem.
> If we ensure that this is fixed in the playbooks to upgrade to 3.5, is that
> acceptable?

Yes, that sounds good for the customer. If you know the bz number for the 3.5 backport, please let me know.
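The patch in the ansible log shows the root cause: the upgrade playbook applied a strategic-merge patch with the liveness probe port hardcoded to 1936 instead of reusing the router's existing stats port. A minimal Python sketch of the difference (not the playbook's actual code; the deployment config structure is simplified to the relevant fields):

```python
import copy

# Simplified router deployment config, created with a custom stats port.
dc = {
    "spec": {"template": {"spec": {"containers": [{
        "name": "router",
        "livenessProbe": {"httpGet": {"path": "/healthz", "port": 1937,
                                      "host": "localhost", "scheme": "HTTP"}},
    }]}}},
}

def probe_port(dc):
    """Read the liveness probe port from the (simplified) dc."""
    container = dc["spec"]["template"]["spec"]["containers"][0]
    return container["livenessProbe"]["httpGet"]["port"]

def upgrade_clobbering(dc):
    """What the 3.4 upgrade effectively did: hardcode port 1936."""
    out = copy.deepcopy(dc)
    container = out["spec"]["template"]["spec"]["containers"][0]
    container["livenessProbe"]["httpGet"]["port"] = 1936  # always 1936
    return out

def upgrade_preserving(dc):
    """The fixed behavior: keep whatever probe port the dc already uses."""
    out = copy.deepcopy(dc)
    container = out["spec"]["template"]["spec"]["containers"][0]
    existing = container["livenessProbe"]["httpGet"]["port"]
    container["livenessProbe"]["httpGet"]["port"] = existing  # unchanged
    return out

print(probe_port(upgrade_clobbering(dc)))  # 1936: probe no longer hits the stats port
print(probe_port(upgrade_preserving(dc)))  # 1937: probe still matches the stats port
```

After the clobbering patch, haproxy still listens for stats on 1937, so the probe against 1936 fails and the pod goes into CrashLoopBackOff.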
Proposed: https://github.com/openshift/openshift-ansible/pull/4493
Merged: https://github.com/openshift/openshift-ansible/pull/4493
Verified with openshift-ansible-3.5.89.

1. Create a router with stats-port=1937:

oadm router --stats-port=1937

2. The port is still 1937 after the upgrade:

[cloud-user@container--1 ~]$ oc get dc router -o json | grep -B 1 1937
                "name": "STATS_PORT",
                "value": "1937"
--
                "path": "/healthz",
                "port": 1937,
--
            {
                "containerPort": 1937,
                "hostPort": 1937,
--
                "path": "/healthz",
                "port": 1937,
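The grep-based check above can also be done programmatically. A hypothetical sketch (the dc fragment is abbreviated; a real deployment config returned by 'oc get dc router -o json' has many more fields) that walks the router dc and asserts every probe and container port matches the configured STATS_PORT:

```python
import json

# Abbreviated router dc, as 'oc get dc router -o json' might return it.
dc = json.loads("""
{
  "spec": {"template": {"spec": {"containers": [{
    "name": "router",
    "env": [{"name": "STATS_PORT", "value": "1937"}],
    "ports": [{"containerPort": 1937, "hostPort": 1937}],
    "livenessProbe": {"httpGet": {"path": "/healthz", "port": 1937}},
    "readinessProbe": {"httpGet": {"path": "/healthz", "port": 1937}}
  }]}}}
}
""")

def probe_ports_match_stats(dc):
    """True if every probe port and stats container port equals STATS_PORT."""
    c = dc["spec"]["template"]["spec"]["containers"][0]
    stats = int(next(e["value"] for e in c["env"] if e["name"] == "STATS_PORT"))
    ports = [c["livenessProbe"]["httpGet"]["port"],
             c["readinessProbe"]["httpGet"]["port"]]
    ports += [p["containerPort"] for p in c["ports"]]
    return all(p == stats for p in ports)

print(probe_ports_match_stats(dc))  # True when the upgrade preserved the ports
```

On a broken dc (probe port reset to 1936 while STATS_PORT is still 1937) this check returns False, which is exactly the mismatch that caused the CrashLoopBackOff.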
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1666