Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1724685

Summary: OSP 14->15: Keystone broken after controller-0 upgrade, returns 500s
Product: Red Hat OpenStack
Component: openstack-tripleo-heat-templates
Version: 15.0 (Stein)
Reporter: Jiri Stransky <jstransk>
Assignee: Carlos Camacho <ccamacho>
QA Contact: Sasha Smolyak <ssmolyak>
CC: ccamacho, jfrancoa, lbezdick, mburns, sclewis
Status: CLOSED ERRATA
Severity: high
Priority: urgent
Keywords: Triaged, ZStream
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-tripleo-common-10.8.1-0.20190813170455.913b8de.el8ost.noarch openstack-tripleo-heat-templates-10.6.1-0.20190815230440.9adae50.el8ost.noarch
Doc Type: If docs needed, set a value
Doc Text:
Docs Contact:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-03-05 11:59:10 UTC
Type: Bug
Regression: ---
Embargoed:
Bug Depends On:
Bug Blocks: 1727807

Description Jiri Stransky 2019-06-27 14:27:47 UTC
Upgrading 14->15 according to our WIP guide
(https://gitlab.cee.redhat.com/osp15/osp-upgrade-el8/blob/master/README.md),
we can upgrade the 1st controller up to step 5, where it fails on the Gnocchi upgrade.

The task where Ansible stops is

TASK [Debug output for task: Start containers for step 5] **********************

and the relevant piece of output is:

        "Error running ['podman', 'run', '--name', 'ceilometer_gnocchi_upgrade', '--label', 'config_id=tripleo_step5', '--label', 'container_name=ceilometer_gnocchi_upgrade', '--label', 'managed_by=paunch', '--label', 'config_data={\"command\": [\"/usr/bin/bootstrap_host_exec\", \"ceilometer_agent_central\", \"su ceilometer -s /bin/bash -c \\'for n in {1..10}; do /usr/bin/ceilometer-upgrade && exit 0 || sleep 30; done; exit 1\\'\"], \"detach\": false, \"healthcheck\": {\"test\": \"/openstack/healthcheck\"}, \"image\": \"brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-ceilometer-central:latest\", \"net\": \"host\", \"privileged\": false, \"start_order\": 99, \"user\": \"root\", \"volumes\": [\"/etc/hosts:/etc/hosts:ro\", \"/etc/localtime:/etc/localtime:ro\", \"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\", \"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro\", \"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\", \"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\", \"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\", \"/dev/log:/dev/log\", \"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\", \"/etc/puppet:/etc/puppet:ro\", \"/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro\", \"/var/log/containers/ceilometer:/var/log/ceilometer:z\"]}', '--conmon-pidfile=/var/run/ceilometer_gnocchi_upgrade.pid', '--log-driver', 'json-file', '--log-opt', 'path=/var/log/containers/stdouts/ceilometer_gnocchi_upgrade.log', '--net=host', '--privileged=false', '--user=root', '--volume=/etc/hosts:/etc/hosts:ro', '--volume=/etc/localtime:/etc/localtime:ro', '--volume=/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '--volume=/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '--volume=/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '--volume=/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '--volume=/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '--volume=/dev/log:/dev/log', '--volume=/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '--volume=/etc/puppet:/etc/puppet:ro', '--volume=/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', '--volume=/var/log/containers/ceilometer:/var/log/ceilometer:z', 'brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-ceilometer-central:latest', '/usr/bin/bootstrap_host_exec', 'ceilometer_agent_central', \"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade && exit 0 || sleep 30; done; exit 1'\"]. [1]",
        "stdout: "


The container logs are empty:

[root@controller-0 ~]# podman ps -a | grep 'Exited ([1-9]'
189fb0b63b67  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-ceilometer-central:latest         dumb-init --singl...  3 hours ago  Exited (1) 3 hours ago         ceilometer_gnocchi_upgrade
[root@controller-0 ~]# podman logs ceilometer_gnocchi_upgrade
[root@controller-0 ~]#
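
Note that the paunch-generated podman command above uses the json-file log driver with a log path under /var/log/containers/stdouts/, so even when `podman logs` is empty, the stdout log file may still hold output. A small sketch of where to look (guarded so it is a no-op on hosts without the file or without podman):

```shell
# The container was started with:
#   --log-driver json-file
#   --log-opt path=/var/log/containers/stdouts/ceilometer_gnocchi_upgrade.log
# so check that file when `podman logs` comes back empty.
LOG=/var/log/containers/stdouts/ceilometer_gnocchi_upgrade.log
if [ -f "$LOG" ]; then
    tail -n 50 "$LOG"
else
    echo "no stdout log at $LOG"
fi
# Also check the recorded exit code, if podman is available.
if command -v podman >/dev/null 2>&1; then
    podman inspect --format '{{.State.ExitCode}}' ceilometer_gnocchi_upgrade 2>/dev/null || true
fi
```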

app log:

2019-06-26 12:01:34,523 [18] WARNING  oslo_config.cfg: Deprecated: Option "coordination_url" from group "storage" is deprecated. Use option "coordination_url" from group "DEFAULT".
2019-06-26 12:01:34,535 [18] INFO     gnocchi.service: Gnocchi version 4.3.3.dev4
2019-06-26 12:01:34,590 [18] INFO     gnocchi.rest.app: WSGI config used: /usr/lib/python3.6/site-packages/gnocchi/rest/api-paste.ini
2019-06-26 12:01:35,500 [19] WARNING  oslo_config.cfg: Deprecated: Option "coordination_url" from group "storage" is deprecated. Use option "coordination_url" from group "DEFAULT".
2019-06-26 12:01:35,500 [19] INFO     gnocchi.service: Gnocchi version 4.3.3.dev4
2019-06-26 12:01:35,637 [19] INFO     gnocchi.rest.app: WSGI config used: /usr/lib/python3.6/site-packages/gnocchi/rest/api-paste.ini
2019-06-26 12:01:35,881 [18] WARNING  keystonemiddleware.auth_token: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
2019-06-26 12:01:37,177 [19] WARNING  keystonemiddleware.auth_token: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
2019-06-26 12:04:20,157 [19] WARNING  keystonemiddleware.auth_token: Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use  memcache for the auth_token token cache by setting the memcached_servers option.
2019-06-26 12:04:20,252 [19] ERROR    keystonemiddleware.auth_token: Bad response code while validating token: 500 An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-302b0603-ee1b-438f-a127-d505963c9a16)
2019-06-26 12:04:20,253 [19] WARNING  keystonemiddleware.auth_token: Identity response: {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}}
2019-06-26 12:04:20,253 [19] CRITICAL keystonemiddleware.auth_token: Unable to validate token: Failed to fetch token data from identity server

statsd log looks clean:

2019-06-26 12:01:39,133 [6] WARNING  oslo_config.cfg: Deprecated: Option "coordination_url" from group "storage" is deprecated. Use option "coordination_url" from group "DEFAULT".
2019-06-26 12:01:39,133 [6] INFO     gnocchi.service: Gnocchi version 4.3.3.dev4
2019-06-26 12:01:40,465 [6] INFO     gnocchi.statsd: Started on 0.0.0.0:8125
2019-06-26 12:01:40,466 [6] INFO     gnocchi.statsd: Flush delay: 10 seconds

Repeating error in metricd log:

2019-06-26 15:19:47,565 [22] ERROR    gnocchi.cli.metricd: Error while listening for new measures notification, retrying
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 184, in _read_from_socket
    raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
OSError: Connection closed by server.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/gnocchi/cli/metricd.py", line 184, in _fill_sacks_to_process
    for sack in self.incoming.iter_on_sacks_to_process():
  File "/usr/lib/python3.6/site-packages/gnocchi/incoming/redis.py", line 186, in iter_on_sacks_to_process
    for message in p.listen():
  File "/usr/lib/python3.6/site-packages/redis/client.py", line 3123, in listen
    response = self.handle_message(self.parse_response(block=True))
  File "/usr/lib/python3.6/site-packages/redis/client.py", line 3036, in parse_response
    return self._execute(connection, connection.read_response)
  File "/usr/lib/python3.6/site-packages/redis/client.py", line 3013, in _execute
    return command(*args)
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 636, in read_response
    raise e
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 633, in read_response
    response = self._parser.read_response()
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 291, in read_response
    response = self._buffer.readline()
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 223, in readline
    self._read_from_socket()
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 198, in _read_from_socket
    (e.args,))
redis.exceptions.ConnectionError: Error while reading from socket: ('Connection closed by server.',)


This is probably caused by Keystone not working; I get 500s:

(overcloud) [stack@undercloud-0 ~]$ source overcloudrc.v3
(overcloud) [stack@undercloud-0 ~]$ openstack user list 
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-e56403d1-f45e-4226-9ec9-86e7344a9860)

HAProxy looks OK, Keystone is responding:

[root@controller-0 ~]# curl 172.17.1.17:5000
{"versions": {"values": [{"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://172.17.1.17:5000/v3/", "rel": "self"}]}]}}[root@controller-0 ~]# 

keystone.log has deprecation warnings, but I can't see anything more serious:

2019-06-27 14:22:26.119 19 WARNING py.warnings [req-38f6f6c2-88dd-4b06-bd57-8062538c88a7 e65c2e70917a41598fb069ebadc5a857 6b2971482239406c800033cdd2a7fcd1 - default default] /usr/lib/python3.6/site-packages/oslo_policy/policy.py:695: UserWarning: Policy "identity:get_credential":"rule:admin_required" was deprecated in S in favor of "identity:get_credential":"(role:reader and system_scope:all) or user_id:%(target.credential.user_id)s". Reason: As of the Stein release, the credential API now understands how to handle system-scoped tokens in addition to project-scoped tokens, making the API more accessible to users without compromising security or manageability for administrators. The new default policies for this API account for these changes automatically. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
  warnings.warn(deprecated_msg)

Keystone WSGI log has these errors:

[Thu Jun 27 14:08:30.110389 2019] [wsgi:error] [pid 11155] (11)Resource temporarily unavailable: [client 192.168.24.8:49896] mod_wsgi (pid=11155): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.
[Thu Jun 27 14:08:34.512123 2019] [wsgi:error] [pid 11067] (11)Resource temporarily unavailable: [client 172.17.1.41:55650] mod_wsgi (pid=11067): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.
[Thu Jun 27 14:08:35.158516 2019] [wsgi:error] [pid 11151] (11)Resource temporarily unavailable: [client 192.168.24.8:54742] mod_wsgi (pid=11151): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.
[Thu Jun 27 14:08:36.194404 2019] [wsgi:error] [pid 11281] (11)Resource temporarily unavailable: [client 172.17.1.41:55648] mod_wsgi (pid=11281): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.
[Thu Jun 27 14:08:36.194782 2019] [wsgi:error] [pid 11293] (11)Resource temporarily unavailable: [client 192.168.24.13:48878] mod_wsgi (pid=11293): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.
[Thu Jun 27 14:08:36.195122 2019] [wsgi:error] [pid 11005] (11)Resource temporarily unavailable: [client 172.17.1.11:33840] mod_wsgi (pid=11005): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.
[Thu Jun 27 14:08:36.266257 2019] [wsgi:error] [pid 10979] (11)Resource temporarily unavailable: [client 192.168.24.13:49398] mod_wsgi (pid=10979): Unable to connect to WSGI daemon process 'keystone' on '/var/run/wsgi.6.1.1.sock' after multiple attempts as listener backlog limit was exceeded or the socket does not exist.

Comment 4 Jiri Stransky 2019-07-01 13:09:59 UTC
OK, another interesting find:

(overcloud) [stack@undercloud-0 ~]$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-07-01T14:05:05+0000                                                                                                                                                                |
| id         | gAAAAABdGgUBc8GubF7yPmoUE6qOCwTGLKCTiGlYCg4E5AB45PQuAvxpmLCgJKpK1gvIIJrphvnNCAydsZ6LqrY49UflsrV-M7hbu4SOc2n6R7E-3Snkhxkpcywp1QczzDA7d9AsxaKs_XnkTo82NCfHAMjw8Ekd7FIyXACzLcPFD0aX18gPTYk |
| project_id | 07f954174fc348d38dabe1785ba24ad3                                                                                                                                                        |
| user_id    | 8be265d069864b3f95f5919d78526077                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-7620771b-12a5-4416-b9f5-898df04a35a2)
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-6f3a0dc7-f5a9-4c15-bcba-7e507b071524)
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-07-01T14:05:12+0000                                                                                                                                                                |
| id         | gAAAAABdGgUIPI3rM5idJLhjUwmfXnGfN4s7-cwBFdw78YeIpHpLJcVlOhnwhtfrCbdbBJWpiENMu4AHXR5gvVFLtAi3DSDk35sz7JrLOXfpehSXnE7w0HIqNUgrLsn6F5wxi-4r0bQYu7uXRQ63nMnyE7PPQHg2Ta1qKbO6U-UWXpOYh7HiTTg |
| project_id | 07f954174fc348d38dabe1785ba24ad3                                                                                                                                                        |
| user_id    | 8be265d069864b3f95f5919d78526077                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-c2cca510-3dd0-4a1a-ad49-1f00a748d24d)
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-e927ce24-80b6-45df-9943-6b809e6176f1)
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-07-01T14:05:18+0000                                                                                                                                                                |
| id         | gAAAAABdGgUOcKr6KcMgRBbXzKbvsbZ08tZ7WdgDFlqOQSLHiccmMGWf__UwlWXjBZH5jg-Wv2RcmAzVcyPeOlsKwJHz2YeJlC_kJnX-HQlXtgK3P88qMQQmhMkGbJdWKahgQkMFC4cAxJJEkkEW0uOBFQBP7AwITAc3mtX8I689nsuMvid-JvM |
| project_id | 07f954174fc348d38dabe1785ba24ad3                                                                                                                                                        |
| user_id    | 8be265d069864b3f95f5919d78526077                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-6a7ad49f-ead0-4e57-8a48-2afff3ba2421)
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-3867ec74-bc34-4851-841d-4eb0ad925514)
(overcloud) [stack@undercloud-0 ~]$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-07-01T14:05:24+0000                                                                                                                                                                |
| id         | gAAAAABdGgUUFicKaga6KB04nVNO5TwrOOrScVfuLBot-sbmJICHpKWBH2Jt83y1BrhKjnXtcclNI6eOfsDzWOqro1hKcWBP9jzVWmkMfi2pbAZeZiGg_hv7glZMB2XKCWJ2HfH07oG8qsGubbyBA1fGm6d2zv0Zd6P3fs8Dk6jLLc7iugu-L-I |
| project_id | 07f954174fc348d38dabe1785ba24ad3                                                                                                                                                        |
| user_id    | 8be265d069864b3f95f5919d78526077                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

This looks like a round-robin issue with 2 out of 3 backends being down. At this stage of the upgrade only 1 controller is up, so we need to ensure that round-robin balancing does not distribute any requests to the other 2 controllers.
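
The pattern above (roughly one success per three attempts) is consistent with that hypothesis. A minimal, purely illustrative simulation of round-robin over three backends with only the upgraded one responding:

```python
from itertools import cycle

# Hypothetical model: requests are round-robined across the 3 keystone
# backends, but only controller-0 has been upgraded and actually serves.
backends = cycle(["controller-0", "controller-1", "controller-2"])
up = {"controller-0"}  # only the upgraded controller responds

def token_issue():
    backend = next(backends)
    return "token" if backend in up else "HTTP 500"

results = [token_issue() for _ in range(9)]
print(results)
# One success followed by two 500s, repeating -- matching the CLI output above.
```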

Comment 5 Jiri Stransky 2019-07-03 12:15:48 UTC
This is due to missing stop tasks for the old controllers. HAProxy mistakenly load-balances to the old controllers as well, when it should be limited to the upgraded ones.
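
To illustrate the desired state (a hypothetical backend stanza, not the actual deployed config; addresses are placeholders): once the stop tasks take the old controllers' services down, health checks mark those servers out and traffic only reaches the upgraded node, equivalent to something like:

```
# Hypothetical haproxy fragment: during the single-controller stage of the
# upgrade, only the upgraded node should receive keystone traffic.
backend keystone_public
    balance roundrobin
    server controller-0 <upgraded-ctrl-ip>:5000 check
    server controller-1 <old-ctrl-ip>:5000 check disabled
    server controller-2 <old-ctrl-ip>:5000 check disabled
```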

Comment 6 Jiri Stransky 2019-08-14 15:06:12 UTC
This was fixed by Carlos, adding patch links.

Comment 8 Shelley Dunne 2019-09-19 18:29:40 UTC
Re-setting Target Milestone z1 to --- to begin the 15z1 Maintenance Release.

Comment 13 Jose Luis Franco 2019-12-13 09:58:08 UTC
Working with THT package:
(undercloud) [stack@undercloud-0 ~]$ rpm -qa | grep tripleo-heat-templates
openstack-tripleo-heat-templates-10.6.2-0.20191202200455.41d9f8a.el8ost.noarch

[root@undercloud-0 stack]# rpm -qa | grep tripleo-common
python3-tripleo-common-10.8.2-0.20191125220527.c2a83c1.el8ost.noarch
openstack-tripleo-common-10.8.2-0.20191125220527.c2a83c1.el8ost.noarch
openstack-tripleo-common-containers-10.8.2-0.20191125220527.c2a83c1.el8ost.noarch


2019-12-12 19:54:33 | PLAY [External upgrade step 1] *************************************************
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:09 -0500 (0:00:00.131)       0:00:03.936 *****
2019-12-12 19:54:33 | included: /usr/share/ansible/roles/tripleo-container-stop/tasks/container_stop.yaml for undercloud
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:10 -0500 (0:00:00.373)       0:00:04.310 *****
2019-12-12 19:54:33 |
2019-12-12 19:54:33 | TASK [tripleo-container-stop : Make sure the container is stopped even if container_cli do not match] ***
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.16] => (item=controller-0) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_api /bin/true; then\n    systemctl stop tripleo_aodh_api.service\nfi\nif type docker &> /dev/null && docker exec aodh_api /bin/true; then\n    docker stop aodh_api\nfi\n", "delta": "0:00:00.003134", "end": "2019-12-13 00:54:10.717540", "rc": 0, "start": "2019-12-13 00:54:10.714406", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [], "tripleo_delegate_to_item": "controller-0"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.23] => (item=controller-1) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_api /bin/true; then\n    systemctl stop tripleo_aodh_api.service\nfi\nif type docker &> /dev/null && docker exec aodh_api /bin/true; then\n    docker stop aodh_api\nfi\n", "delta": "0:00:00.429473", "end": "2019-12-13 00:54:11.693939", "rc": 0, "start": "2019-12-13 00:54:11.264466", "stderr": "", "stderr_lines": [], "stdout": "aodh_api", "stdout_lines": ["aodh_api"], "tripleo_delegate_to_item": "controller-1"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.20] => (item=controller-2) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_api /bin/true; then\n    systemctl stop tripleo_aodh_api.service\nfi\nif type docker &> /dev/null && docker exec aodh_api /bin/true; then\n    docker stop aodh_api\nfi\n", "delta": "0:00:00.444288", "end": "2019-12-13 00:54:12.688377", "rc": 0, "start": "2019-12-13 00:54:12.244089", "stderr": "", "stderr_lines": [], "stdout": "aodh_api", "stdout_lines": ["aodh_api"], "tripleo_delegate_to_item": "controller-2"}
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:12 -0500 (0:00:02.635)       0:00:06.946 *****
2019-12-12 19:54:33 | included: /usr/share/ansible/roles/tripleo-container-stop/tasks/container_stop.yaml for undercloud
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:13 -0500 (0:00:00.364)       0:00:07.311 *****
2019-12-12 19:54:33 |
2019-12-12 19:54:33 | TASK [tripleo-container-stop : Make sure the container is stopped even if container_cli do not match] ***
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.16] => (item=controller-0) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_evaluator /bin/true; then\n    systemctl stop tripleo_aodh_evaluator.service\nfi\nif type docker &> /dev/null && docker exec aodh_evaluator /bin/true; then\n    docker stop aodh_evaluator\nfi\n", "delta": "0:00:00.003249", "end": "2019-12-13 00:54:13.360661", "rc": 0, "start": "2019-12-13 00:54:13.357412", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [], "tripleo_delegate_to_item": "controller-0"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.23] => (item=controller-1) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_evaluator /bin/true; then\n    systemctl stop tripleo_aodh_evaluator.service\nfi\nif type docker &> /dev/null && docker exec aodh_evaluator /bin/true; then\n    docker stop aodh_evaluator\nfi\n", "delta": "0:00:00.476587", "end": "2019-12-13 00:54:14.037990", "rc": 0, "start": "2019-12-13 00:54:13.561403", "stderr": "", "stderr_lines": [], "stdout": "aodh_evaluator", "stdout_lines": ["aodh_evaluator"], "tripleo_delegate_to_item": "controller-1"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.20] => (item=controller-2) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_evaluator /bin/true; then\n    systemctl stop tripleo_aodh_evaluator.service\nfi\nif type docker &> /dev/null && docker exec aodh_evaluator /bin/true; then\n    docker stop aodh_evaluator\nfi\n", "delta": "0:00:00.383456", "end": "2019-12-13 00:54:14.587164", "rc": 0, "start": "2019-12-13 00:54:14.203708", "stderr": "", "stderr_lines": [], "stdout": "aodh_evaluator", "stdout_lines": ["aodh_evaluator"], "tripleo_delegate_to_item": "controller-2"}
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:14 -0500 (0:00:01.528)       0:00:08.839 *****
2019-12-12 19:54:33 | included: /usr/share/ansible/roles/tripleo-container-stop/tasks/container_stop.yaml for undercloud
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:15 -0500 (0:00:00.353)       0:00:09.192 *****
2019-12-12 19:54:33 |
2019-12-12 19:54:33 | TASK [tripleo-container-stop : Make sure the container is stopped even if container_cli do not match] ***
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.16] => (item=controller-0) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_listener /bin/true; then\n    systemctl stop tripleo_aodh_listener.service\nfi\nif type docker &> /dev/null && docker exec aodh_listener /bin/true; then\n    docker stop aodh_listener\nfi\n", "delta": "0:00:00.002851", "end": "2019-12-13 00:54:15.255944", "rc": 0, "start": "2019-12-13 00:54:15.253093", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [], "tripleo_delegate_to_item": "controller-0"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.23] => (item=controller-1) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_listener /bin/true; then\n    systemctl stop tripleo_aodh_listener.service\nfi\nif type docker &> /dev/null && docker exec aodh_listener /bin/true; then\n    docker stop aodh_listener\nfi\n", "delta": "0:00:10.244591", "end": "2019-12-13 00:54:25.684357", "rc": 0, "start": "2019-12-13 00:54:15.439766", "stderr": "", "stderr_lines": [], "stdout": "aodh_listener", "stdout_lines": ["aodh_listener"], "tripleo_delegate_to_item": "controller-1"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.20] => (item=controller-2) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_listener /bin/true; then\n    systemctl stop tripleo_aodh_listener.service\nfi\nif type docker &> /dev/null && docker exec aodh_listener /bin/true; then\n    docker stop aodh_listener\nfi\n", "delta": "0:00:01.857296", "end": "2019-12-13 00:54:27.728160", "rc": 0, "start": "2019-12-13 00:54:25.870864", "stderr": "", "stderr_lines": [], "stdout": "aodh_listener", "stdout_lines": ["aodh_listener"], "tripleo_delegate_to_item": "controller-2"}
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:27 -0500 (0:00:12.800)       0:00:21.993 *****
2019-12-12 19:54:33 | included: /usr/share/ansible/roles/tripleo-container-stop/tasks/container_stop.yaml for undercloud
2019-12-12 19:54:33 | Thursday 12 December 2019  19:54:28 -0500 (0:00:00.360)       0:00:22.354 *****
2019-12-12 19:54:33 |
2019-12-12 19:54:33 | TASK [tripleo-container-stop : Make sure the container is stopped even if container_cli do not match] ***
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.16] => (item=controller-0) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_notifier /bin/true; then\n    systemctl stop tripleo_aodh_notifier.service\nfi\nif type docker &> /dev/null && docker exec aodh_notifier /bin/true; then\n    docker stop aodh_notifier\nfi\n", "delta": "0:00:00.003380", "end": "2019-12-13 00:54:28.437905", "rc": 0, "start": "2019-12-13 00:54:28.434525", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [], "tripleo_delegate_to_item": "controller-0"}
2019-12-12 19:54:33 | changed: [undercloud -> 192.168.24.23] => (item=controller-1) => {"ansible_loop_var": "tripleo_delegate_to_item", "changed": true, "cmd": "# We need to make sure that containers are stopped\n# as we might have different CLIs to interact with\n# them. I.e the container_cli might be setted to be podman\n# but we might have the containers running with docker.\nset -eu\nif command -v podman && podman exec aodh_notifier /bin/true; then\n    systemctl stop tripleo_aodh_notifier.service\nfi\nif type docker &> /dev/null && docker exec aodh_notifier /bin/true; then\n    docker stop aodh_notifier\nfi\n", "delta": "0:00:01.966848", "end": "2019-12-13 00:54:30.609018", "rc": 0, "start": "2019-12-13 00:54:28.642170", "stderr": "", "stderr_lines": [], "stdout": "aodh_notifier", "stdout_lines": ["aodh_notifier"], "tripleo_delegate_to_item": "controller-1"}
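
For readability, the escaped `cmd` embedded in the tripleo-container-stop task output above decodes to roughly this script (shown for the aodh_api case; the guards make it a no-op on hosts without podman or docker):

```shell
set -eu
# We need to make sure that containers are stopped, as we might have
# different CLIs to interact with them: container_cli might be set to
# podman but the containers may still be running under docker.
if command -v podman && podman exec aodh_api /bin/true; then
    systemctl stop tripleo_aodh_api.service
fi
if type docker &> /dev/null && docker exec aodh_api /bin/true; then
    docker stop aodh_api
fi
```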

Comment 14 Alex McLeod 2020-02-19 12:44:04 UTC
If this bug requires doc text for errata release, please set the 'Doc Type' and provide draft text according to the template in the 'Doc Text' field. The documentation team will review, edit, and approve the text.

If this bug does not require doc text, please set the 'requires_doc_text' flag to '-'.

Comment 16 errata-xmlrpc 2020-03-05 11:59:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0643

Comment 17 Red Hat Bugzilla 2023-09-18 00:16:39 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.