Description of problem:

When upgrading from OSP13 to OSP14 with a subscribed overcloud, the upgrade fails with the following error:

Tuesday 18 June 2019  06:33:23 -0400 (0:00:00.320)       0:00:53.896 **********
ok: [controller-0] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [redhat-subscription : Inform the operators if both rhsm_activation_key and rhsm_repos are given] ***
Tuesday 18 June 2019  06:33:23 -0400 (0:00:00.107)       0:00:54.004 **********
skipping: [controller-0] => {}

TASK [redhat-subscription : Configure Red Hat Subscription Manager] ************
Tuesday 18 June 2019  06:33:23 -0400 (0:00:00.092)       0:00:54.096 **********
changed: [controller-0] => {"changed": true, "checksum": "cb8bb3a1b3e74455a3a35d05b78f2d15b0709d5a", "dest": "/etc/rhsm/rhsm.conf", "gid": 0, "group": "root", "md5sum": "af48d89cf5fdb3774a474913bd713002", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 603, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1560854003.87-143905135939114/source", "state": "file", "uid": 0}

TASK [redhat-subscription : include_tasks] *************************************
Tuesday 18 June 2019  06:33:24 -0400 (0:00:00.640)       0:00:54.736 **********
included: /usr/share/ansible/roles/redhat-subscription/tasks/portal.yml for controller-0

TASK [redhat-subscription : Manage Red Hat subscription] ***********************
Tuesday 18 June 2019  06:33:24 -0400 (0:00:00.182)       0:00:54.919 **********
changed: [controller-0] => {"changed": true, "msg": "System successfully registered to 'None'.", "subscribed_pool_ids": {"8a85f98b6494e37f0164caf879ee156b": "1"}}

TASK [redhat-subscription : Configure repository subscriptions] ****************
Tuesday 18 June 2019  06:33:59 -0400 (0:00:35.078)       0:01:29.998 **********
fatal: [controller-0]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (rhsm_repository) module: purge Supported parameters include: name, state"}

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
controller-0               : ok=137  changed=63   unreachable=0    failed=1

Tuesday 18 June 2019  06:34:00 -0400 (0:00:00.411)       0:01:30.410 **********
===============================================================================
Ansible failed, check log at /var/lib/mistral/572bed3a-3e12-4040-b69c-9adc2bf39331/ansible.log.

The purge option is available in the rhsm_repository.py library included in the redhat-subscription role:

(undercloud) [stack@undercloud-0 ansible]$ grep "purge" /usr/share/ansible/roles/redhat-subscription/library/rhsm_repository.py
    purge:
      purge: True
def repository_modify(module, state, name, purge=False):
    if purge:
        purge=dict(type='bool', default=False),
    purge = module.params['purge']
    repository_modify(module, state, name, purge)

However, because rhsm_repository is now shipped as part of the Ansible installation, the redhat-subscription role ends up executing the module code from the default Ansible installation instead of the copy bundled in the role at /usr/share/ansible/roles/redhat-subscription/library/rhsm_repository.py.
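As a quick sanity check, here is a one-liner sketch (it uses only the two paths already shown above; "grep -l" lists just the files that contain a match) to see which copy of the module knows about purge:

```
# Sketch: list which rhsm_repository.py mentions "purge".
# Only the role's bundled copy should be printed; the copy shipped with
# ansible 2.6.11 (the one the play actually loads) does not mention it.
grep -l purge \
    /usr/share/ansible/roles/redhat-subscription/library/rhsm_repository.py \
    /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/rhsm_repository.py
```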
The rhsm_repository module shipped with Ansible 2.6.11 does not include the purge option:

(undercloud) [stack@undercloud-0 ~]$ grep "purge" /lib/python2.7/site-packages/ansible/modules/packaging/os/rhsm_repository.py
(undercloud) [stack@undercloud-0 ~]$

Version-Release number of selected component (if applicable):

(undercloud) [stack@undercloud-0 ~]$ ansible-playbook --version
ansible-playbook 2.6.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/stack/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-playbook
  python version = 2.7.5 (default, May 20 2019, 12:21:26) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

python2-tripleo-common-9.5.0-5.el7ost.noarch
openstack-tripleo-common-9.5.0-5.el7ost.noarch
openstack-tripleo-common-containers-9.5.0-5.el7ost.noarch
ansible-role-redhat-subscription-1.0.2-2.el7ost.noarch

How reproducible:

Steps to Reproduce:
1. Deploy OSP13 with the old rhsm service.
2. Upgrade the undercloud to OSP14 and configure the rhsm service with OSP14 credentials (portal ones, not Satellite).
3. Run the overcloud upgrade prepare and overcloud upgrade run commands, passing the rhsm.yaml environment file.

Actual results:
The upgrade fails because the rhsm_repository module does not support the purge option.

Expected results:
The upgrade succeeds.

Additional info:
As a workaround, the ANSIBLE_LIBRARY path can be set [1] (a sketch follows below), or Ansible 2.8 can be used, which is the release in which the rhsm_repository module gained the purge argument [2].

1. https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-module-path
2. https://docs.ansible.com/ansible/2.8/modules/rhsm_repository_module.html#parameters
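A minimal sketch of the first workaround, assuming ansible-playbook is invoked from a shell where the variable is visible (for the Mistral-driven upgrade, the equivalent "library" setting in the [defaults] section of the ansible.cfg used for that run would have to be set instead):

```
# Prepend the role's library directory so its bundled rhsm_repository module
# (the one that understands "purge") is found before the module shipped with
# ansible 2.6; the second path is the configured module search path shown above.
export ANSIBLE_LIBRARY=/usr/share/ansible/roles/redhat-subscription/library:/usr/share/ansible/plugins/modules
```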
I executed the same command from the mistral_executor container with verbose logging, and this is the result of the failing task:

TASK [redhat-subscription : Configure repository subscriptions] *************************************************************************************************************
task path: /usr/share/ansible/roles/redhat-subscription/tasks/portal.yml:29
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/rhsm_repository.py
<192.168.24.11> ESTABLISH SSH CONNECTION FOR USER: tripleo-admin
<192.168.24.11> SSH: EXEC ssh -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ControlMaster=auto -o ControlPersist=30m -o ServerAliveInterval=5 -o ServerAliveCountMax=5 -o 'IdentityFile="/var/lib/mistral/572bed3a-3e12-4040-b69c-9adc2bf39331/ssh_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=tripleo-admin -o ConnectTimeout=30 -o ControlPath=/var/lib/mistral/572bed3a-3e12-4040-b69c-9adc2bf39331/ansible-ssh/8af4971746 192.168.24.11 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-jxnkkqemavbhfjmkhmxyjjhschsxnrmo; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.24.11> (1, '\n{"msg": "Unsupported parameters for (rhsm_repository) module: purge Supported parameters include: name, state", "failed": true, "invocation": {"module_args": {"purge": true, "name": ["rhel-7-server-rpms", "rhel-7-server-extras-rpms", "rhel-7-server-rh-common-rpms", "rhel-ha-for-rhel-7-server-rpms", "rhel-7-server-openstack-14-rpms"]}}}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 14739\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
fatal: [controller-0]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "name": [
                "rhel-7-server-rpms",
                "rhel-7-server-extras-rpms",
                "rhel-7-server-rh-common-rpms",
                "rhel-ha-for-rhel-7-server-rpms",
                "rhel-7-server-openstack-14-rpms"
            ],
            "purge": true
        }
    },
    "msg": "Unsupported parameters for (rhsm_repository) module: purge Supported parameters include: name, state"
}

NO MORE HOSTS LEFT **********************************************************************************************************************************************************

PLAY RECAP ******************************************************************************************************************************************************************
controller-0               : ok=137  changed=62   unreachable=0    failed=1

As suspected initially, the invoked module is /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/rhsm_repository.py instead of /usr/share/ansible/roles/redhat-subscription/library/rhsm_repository.py.
This is most likely happening because an rhsm_repository task outside the role is run before the rhsm_repository task in the role. The plugin loader looks for a plugin the first time it is encountered, stores that path, and uses it for the duration of the command execution. The debug output shows it is indeed using the module from outside the role. I bet if you look at the entire list of tasks that are run, there is an rhsm_repository task before the task in the role.

```
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/rhsm_repository.py
```

The easiest fix is to rename the module in the role and change the task in the role to match the new module name. Once we move to Ansible 2.8, the module can be removed from the role and the task can be changed back to rhsm_repository.
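For illustration only, a rough sketch of that rename on an installed copy of the role (the module name "redhat_subscription_repository" and the sed edit are assumptions rather than the actual patch, and the proper fix belongs in the role's source, not on a live system):

```
# 1. Rename the bundled module so the plugin loader can no longer confuse it
#    with the rhsm_repository module shipped in ansible itself.
mv /usr/share/ansible/roles/redhat-subscription/library/rhsm_repository.py \
   /usr/share/ansible/roles/redhat-subscription/library/redhat_subscription_repository.py
# 2. Point the "Configure repository subscriptions" task in tasks/portal.yml at
#    the renamed module (this assumes the task invokes the module as "rhsm_repository:").
sed -i 's/rhsm_repository:/redhat_subscription_repository:/' \
    /usr/share/ansible/roles/redhat-subscription/tasks/portal.yml
```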
Jose, thanks for the update. Sam, thanks for your insight.

It appears that the "ansible-role-redhat-subscription" RPM, which is out of our (the RHOSP team's) control, provides the custom module and the related task file. It is the only role installed on my test undercloud that includes a custom module. The custom module has become unnecessary with the release of Ansible 2.8, though I understand that some folks may still want to stick with older Ansible versions.

Related files:
===============
RPM: ansible-role-redhat-subscription
/usr/share/ansible/roles/redhat-subscription/tasks/portal.yml
/usr/share/ansible/roles/redhat-subscription/library/rhsm_repository.py
===============
RPM: ansible
/usr/lib/python3.7/site-packages/ansible/modules/packaging/os/rhsm_repository.py
===============

For now, a handful of workarounds have been documented in this BZ.

As a side note, in RHOSP 13 and 14 we technically only require Ansible >= 2.5 for config-download, which uses the "dictsort" filter and the "loop" keyword. In RHOSP 15 we will require Ansible 2.8 for RHEL 8 management. We are looking into adding validations for the Ansible version in a future release.
Hello Luke, I've prepared a patch based on Sam's suggestion: https://review.opendev.org/#/c/666528/. I'll apply it to my environment and give it a try; let's see if it solves the issue. If you could review it, that would be helpful.
Worked!

"results": ["Repository 'rhel-7-server-rpms' is enabled for this system.", "Repository 'rhel-ha-for-rhel-7-server-rpms' is enabled for this system.", "Repository 'rhel-7-server-extras-rpms' is enabled for this system.", "Repository 'rhel-7-server-openstack-14-rpms' is enabled for this system.", "Repository 'rhel-7-server-rh-common-rpms' is enabled for this system."]}
Thanks Jose for that patch! I did not realize that it was a package managed by the OpenStack team. I have given your patch a +1 and hopefully it will get merged soon.
Can we move this one to MODIFIED? Was it built downstream?
Thanks for the reminder, Emilien; it wasn't cherry-picked downstream yet. I just posted the patch (https://code.engineering.redhat.com/gerrit/#/c/179741/) and it's waiting for merge.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:3747