Description of problem:
The upgrade script is supposed to upgrade pods to the latest versions; however, the pod names are hardcoded, so if a router is named differently (or there are several routers) it will not get patched.

Version-Release number of selected component (if applicable):
atomic-openshift-utils-3.0.35-1.git.0.6a386dd.el7aos.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install OpenShift Enterprise 3.1.0 and create a router named "r1".
2. Deploy a router following the official documentation:
https://access.redhat.com/documentation/en/openshift-enterprise/version-3.1/installation-and-configuration/deploy_router.xml#deploying-a-router

The docs say the following:

To create a router if it does not exist:

$ oadm router <router_name> --replicas=<number> \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router

So the user can choose any name for the router, for example "test_router".
3. Follow the upgrade process to 3.1.1:
https://access.redhat.com/documentation/en/openshift-enterprise/version-3.1/installation-and-configuration/#upgrading-to-openshift-enterprise-3-1-asynchronous-releases

Actual results:
The router pod is not updated, since the ansible script will only upgrade it if the name is "router".

Expected results:
The router is updated regardless of its name. Alternatively, documentation is provided on how to patch routers manually.

Additional info:
Here's the playbook that does the patching:
/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/v3_1_minor/post.yml
It's clearly visible that it will only patch the router if the deploymentConfig is named "dc/router".
We definitely need to find a way to upgrade all routers and registries installed by oadm. My thought is we could perform a query like this:

oc get pods --all-namespaces -l 'router'

...and then update the image spec appropriately.
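The query-then-patch idea above could be sketched roughly as follows, operating on the kind of pod object that `oc get pods -o json` returns. This is a minimal illustration, not the actual playbook logic; the pod data and the `bump_router_image` helper are hypothetical.

```python
def bump_router_image(pod, new_tag):
    """Return (container_name, new_image) pairs with the image tag bumped.

    Only containers whose image carries an explicit tag are rewritten;
    the repository part of the image reference is kept as-is.
    """
    updates = []
    for container in pod["spec"]["containers"]:
        repo, _, _old_tag = container["image"].rpartition(":")
        if repo:  # image had an explicit tag to replace
            updates.append((container["name"], repo + ":" + new_tag))
    return updates

# Sample pod shaped like a label-selected router pod.
pod = {
    "spec": {
        "containers": [
            {"name": "router", "image": "openshift3/ose-haproxy-router:v3.1.0.4"}
        ]
    }
}
```

The actual update would then be applied to the deployment config (not the pod directly) so the change survives redeploys.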
I think the approach we should take is to write a module that looks at all of the `oadm registry` or `oadm router` options of our current version of `oadm`, extracts those values (if set) from the currently defined registry and router, re-generates the deployment config and service from the newly generated values, performs an additive-only merge of the two, and updates the appropriate objects.
Sorry, that's overly complex. It's simpler, after upgrading the oadm binary, to run `oadm router` to generate a default template, then overwrite that with all values from the current DC and SVC.
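The "generate a default template, then overwrite with the current object's values" idea amounts to a recursive overlay merge where the existing DC/SVC values win. A minimal sketch, with the `overlay` helper and sample data being illustrative only:

```python
def overlay(template, current):
    """Merge `current` on top of `template`; values from `current` win.

    Nested dicts are merged recursively, so template keys absent from
    the current object (e.g. newly introduced fields) are preserved.
    """
    merged = dict(template)
    for key, cur_val in current.items():
        if isinstance(merged.get(key), dict) and isinstance(cur_val, dict):
            merged[key] = overlay(merged[key], cur_val)
        else:
            merged[key] = cur_val
    return merged

# Freshly generated default vs. the user's customized object.
template = {"replicas": 1, "labels": {"router": "router"}}
current = {"replicas": 3, "labels": {"env": "prod"}}
```

Here `overlay(template, current)` keeps the user's replica count and labels while picking up any new defaults from the regenerated template.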
If we are going to query all namespaces for the router label, we'll have to take care to only update ones that are using our images to avoid breaking a customized router instance.
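The image check could look something like the guard below: only treat a pod as upgradable if one of its containers runs the Red Hat haproxy router image. The image prefix and the `is_upgradable_router` helper are assumptions for illustration, and the tag-stripping via `split(":")` is a simplification that ignores registry hosts with ports.

```python
# Assumed Red Hat router image repository suffix (illustrative).
REDHAT_ROUTER_IMAGES = ("openshift3/ose-haproxy-router",)

def is_upgradable_router(pod):
    """True only if some container uses a known Red Hat router image."""
    for container in pod["spec"]["containers"]:
        repo = container["image"].split(":")[0]  # drop the tag
        if any(repo.endswith(prefix) for prefix in REDHAT_ROUTER_IMAGES):
            return True
    return False

# A stock router pod vs. a customized one that merely carries the label.
redhat_pod = {"spec": {"containers": [
    {"name": "router",
     "image": "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.1.1"}]}}
custom_pod = {"spec": {"containers": [
    {"name": "router", "image": "mycorp/custom-haproxy:latest"}]}}
```

With a guard like this, a label-selected pod running a custom image would simply be skipped rather than broken.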
*** Bug 1304943 has been marked as a duplicate of this bug. ***
This is currently being worked on here: https://github.com/openshift/openshift-ansible/pull/1377/

For QE, this will require a fair amount of testing. I would suggest the following cases at a minimum:

* 3.0.z -> 3.1.1 upgrade
* 3.0.z -> 3.1.0.4 upgrade using ansible directly, setting openshift_pkg_version

Then for both of those scenarios test:

* zero routers
* a default router, a non-default-named router (possibly in a non-default namespace), and a "custom" router

To create the custom router I have been launching another pod and simply giving it the router label. The idea here is to make sure ansible _only_ updates pods using the haproxy router image from Red Hat.
Verified and passed with atomic-openshift-utils-3.0.40.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:0311