Bug 1738830
| Field | Value |
|---|---|
| Summary | [OSP 16] disconnect_volume errors during post_live_migration result in the overall failure of the migration despite the instance running on the destination. |
| Product | Red Hat OpenStack |
| Component | openstack-nova |
| Version | 13.0 (Queens) |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | medium |
| Reporter | Eduard Barrera <ebarrera> |
| Assignee | Lee Yarwood <lyarwood> |
| QA Contact | OSP DFG:Compute <osp-dfg-compute> |
| CC | abishop, dasmith, eglynn, gkadam, igarciam, jhakimra, kchamart, ltamagno, lyarwood, pkopec, sbauza, sgordon, vromanso |
| Target Milestone | rc |
| Target Release | 16.0 (Train on RHEL 8.1) |
| Keywords | Patch, Triaged, ZStream |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | openstack-nova-20.0.1-0.20191031221638.6aa7d00.el8ost |
| Doc Type | If docs needed, set a value |
| Clones | 1767917 (view as bug list) |
| Bug Blocks | 1767917, 1767925, 1767928 |
| Type | Bug |
| Last Closed | 2020-02-06 14:41:56 UTC |
|
Description
Eduard Barrera
2019-08-08 09:01:37 UTC
The timeline looks like this:
1. The instance was successfully migrated from compute-12 to compute-13 on
2019-08-06, finishing around 11:53. Here's the compute-12 log entry:
2019-08-06 11:53:34.496 1 INFO nova.compute.manager [req-43db7a44-ddaa-4ef7-b9d9-09549d21a74a ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Migrating instance to overcloud-compute-13.XXX finished successfully.
2. compute-12 was rebooted
2019-08-06 12:59:52.433 1 INFO nova.service [-] Starting compute node (version 17.0.10-1.el7ost)
3. Migration from compute-13 back to compute-12 was initiated, but failed at
13:20:24 due to a messaging timeout in pre_live_migration:
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [req-7d55a61f-2eb9-47a1-b379-d1f191def162 ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Pre live migration failed at overcloud-compute-12.XXX: MessagingTimeout: Timed out waiting for a reply to message ID 66a8bb747e8f432ab4405bdee64bd4ca
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Traceback (most recent call last):
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6152, in _do_live_migration
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] block_migration, disk, dest, migrate_data)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 798, in pre_live_migration
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] disk=disk, migrate_data=migrate_data)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] retry=self.retry)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] timeout=timeout, retry=retry)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] retry=retry)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 548, in _send
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] result = self._waiter.wait(msg_id, timeout)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 440, in wait
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] message = self.waiters.get(msg_id, timeout=timeout)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 328, in get
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] 'to message ID %s' % msg_id)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] MessagingTimeout: Timed out waiting for a reply to message ID 66a8bb747e8f432ab4405bdee64bd4ca
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]
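For context, pre_live_migration is invoked as a synchronous RPC call over the message bus, and oslo.messaging raises MessagingTimeout when no reply arrives within the timeout. A minimal sketch of that pattern (illustrative only; the topic, server, and timeout values here are assumptions, and nova's real client lives in nova/compute/rpcapi.py, as the traceback shows):

import oslo_messaging as messaging
from oslo_config import cfg

transport = messaging.get_rpc_transport(cfg.CONF)
target = messaging.Target(topic='compute', server='overcloud-compute-12')
client = messaging.RPCClient(transport, target)

try:
    # call() blocks until a reply arrives on the bus; if the remote
    # compute service never answers, MessagingTimeout is raised.
    client.prepare(timeout=60).call({}, 'pre_live_migration')
except messaging.MessagingTimeout:
    # This is the failure in step 3: the reply never arrived, so the
    # migration is rolled back before it really starts.
    pass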
compute-13 then performs a rollback (I don't know the details of what this entails).
4. At 13:27:22 the instance is started on compute-12:
2019-08-06 13:27:22.285 1 INFO nova.compute.manager [req-f8ffa3be-9569-4fae-abaa-139bf75634f6 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] VM Started (Lifecycle Event)
2019-08-06 13:28:14.056 1 WARNING nova.compute.resource_tracker [req-3b9242ab-ad73-454d-b72f-dc67ac80a3f7 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Instance not resizing, skipping migration.
2019-08-06 13:28:14.125 1 WARNING nova.compute.resource_tracker [req-3b9242ab-ad73-454d-b72f-dc67ac80a3f7 - - - - -] Instance 3c607542-b913-4903-bc8c-ed226c6f4d7c has been moved to another host overcloud-compute-13.XXX(overcloud-compute-13.XXX). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 65536, u'DISK_GB': 100}}.
5. And stopped on compute-13
2019-08-06 13:28:19.268 1 INFO nova.compute.manager [req-924f52cc-6753-4c47-baee-b89250569168 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] VM Paused (Lifecycle Event)
2019-08-06 13:28:19.356 1 INFO nova.compute.manager [req-924f52cc-6753-4c47-baee-b89250569168 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] During sync_power_state the instance has a pending task (migrating). Skip.
2019-08-06 13:28:20.120 1 INFO nova.virt.libvirt.driver [req-271dfc66-5ecf-480b-a10c-df616649d691 ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Migration operation has completed
6. compute-13 fails during post_live_migration
2019-08-06 13:28:20.121 1 INFO nova.compute.manager [req-271dfc66-5ecf-480b-a10c-df616649d691 ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] _post_live_migration() is started..
This quickly leads to the error shown in comment #1. The post-migration code
attempts to disconnect the volume, but this fails when os-brick's
"multipath -f 3600a098038304769735d4e3735632f4c" call fails:
Aug 06 13:28:40 | /etc/multipath.conf does not exist, blacklisting all devices.
Aug 06 13:28:40 | A default multipath.conf file is located at
Aug 06 13:28:40 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Aug 06 13:28:40 | You can run /sbin/mpathconf --enable to create
Aug 06 13:28:40 | /etc/multipath.conf. See man mpathconf(8) for more details
Aug 06 13:28:40 | 3600a098038304769735d4e3735632f4c: map in use
Aug 06 13:28:40 | failed to remove multipath map 3600a098038304769735d4e3735632f4c
The warnings about the missing /etc/multipath.conf file are a side effect of
the services running in a container; they are not the fatal part of the
problem. The fatal part is this:
Aug 06 13:28:40 | 3600a098038304769735d4e3735632f4c: map in use
Aug 06 13:28:40 | failed to remove multipath map 3600a098038304769735d4e3735632f4c
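The "map in use" failure means the device-mapper map still has a positive open count, so multipath refuses to flush it. A minimal diagnostic sketch of that check (an assumption for illustration, using dmsetup's open-count column; this is not what os-brick runs verbatim):

import subprocess

def map_open_count(map_name):
    # 'dmsetup info -c' prints columnar info; '-o open' selects the
    # open (reference) count for the given device-mapper map.
    out = subprocess.check_output(
        ['dmsetup', 'info', '-c', '--noheadings', '-o', 'open', map_name])
    return int(out.decode().strip())

if map_open_count('3600a098038304769735d4e3735632f4c') > 0:
    # Something (e.g. a qemu process or a holder device) still has the
    # map open, so 'multipath -f' fails exactly as in the log above.
    print('map in use; flush would fail')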
This is likely due to the cinder-volume service previously terminating the
compute-13 connection to the volume. We'd need to see the cinder logs to trace
that activity but, regardless, the request to terminate the connection
originates from nova.
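For reference, that terminate request is nova calling cinder's terminate_connection API for the source host. A hedged sketch with python-cinderclient (all endpoint, credential, volume-id, and connector values below are hypothetical; nova's real wrapper is nova/volume/cinder.py):

from keystoneauth1 import identity, session
from cinderclient import client as cinder_client

auth = identity.Password(auth_url='http://keystone.example:5000/v3',
                         username='nova', password='secret',
                         project_name='service',
                         user_domain_id='default',
                         project_domain_id='default')
cinder = cinder_client.Client('3', session=session.Session(auth=auth))

# The connector describes the source host; cinder tells the backend
# to revoke that host's access to the volume.
connector = {'host': 'overcloud-compute-13', 'multipath': True,
             'initiator': 'iqn.1994-05.com.redhat:compute-13'}
cinder.volumes.terminate_connection('<volume-id>', connector)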
I don't see any issues with cinder or os-brick, so I'd like DFG:Compute to
take a look.
The cinder logs don't reveal anything immediately noteworthy (no errors). Debug logs were not enabled at the time the migration failed, so it may not be possible to know exactly what happened. The only thing I've seen so far is the nova RPC timeout in comment #3, which is why I asked nova folks to take a look.

Hi, there is another case with a similar error, seen after a migration:

openstack server migrate 166ec0bc-def7-48e7-a05c-7fb9589b09bd --live cmpcXX

Migrated VM id: 166ec0bc-def7-48e7-a05c-7fb9589b09bd
req-id: req-9ef5484b-3b47-4812-9aed-40f0e0aba00e

nova-compute.log on the compute node:

2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [req-35714866-66d7-4acd-9d3e-765f5709aee5 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] An error occurred while refreshing the network cache.: ConnectTimeout: Request to http://10.30.186.9:9696/v2.0/networks?id=89c954fd-8358-4d53-8f24-015610c0f6c8 timed out
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Traceback (most recent call last):
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6798, in _heal_instance_info_cache
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] self.network_api.get_instance_nw_info(context, instance)
[...]
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] resp = send(**kwargs)
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 763, in _send_request
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] raise exceptions.ConnectTimeout(msg)
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] ConnectTimeout: Request to http://10.30.186.9:9696/v2.0/networks?id=89c954fd-8358-4d53-8f24-015610c0f6c8 timed out
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]
2019-09-23 12:45:12.631 1 WARNING nova.compute.manager [req-a0ecad42-2d88-4cb0-9e3e-4d823a835982 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-unplugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:12.896 1 WARNING nova.compute.manager [req-ac2ba14f-48c0-4be7-8ef0-c49a059e325d 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:15.348 1 WARNING nova.compute.manager [req-3929fd35-484a-4b5f-8d57-3cfe6acab806 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:16.739 1 INFO nova.virt.libvirt.migration [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Increasing downtime to 50 ms after 0 sec elapsed time
2019-09-23 12:45:16.870 1 INFO nova.virt.libvirt.driver [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)
2019-09-23 12:45:17.255 1 WARNING nova.compute.manager [req-882e45dc-6bd8-4466-80d2-ef8d6ee9253c 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:18.776 1 WARNING nova.compute.manager [req-6e806121-3573-4413-859d-40e5f8d5e34d 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:25.951 1 WARNING nova.compute.resource_tracker [req-35714866-66d7-4acd-9d3e-765f5709aee5 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Instance not resizing, skipping migration.
2019-09-23 12:45:35.002 1 INFO nova.compute.manager [req-a9ce38a5-4250-49cc-84cf-a67606c853e7 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] VM Paused (Lifecycle Event)
2019-09-23 12:45:35.083 1 INFO nova.compute.manager [req-a9ce38a5-4250-49cc-84cf-a67606c853e7 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] During sync_power_state the instance has a pending task (migrating). Skip.
2019-09-23 12:45:35.511 1 INFO nova.virt.libvirt.driver [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Migration operation has completed
2019-09-23 12:45:35.513 1 INFO nova.compute.manager [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] _post_live_migration() is started..
2019-09-23 12:45:50.511 1 INFO nova.compute.manager [-] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] VM Stopped (Lifecycle Event)
2019-09-23 12:46:28.402 1 WARNING nova.compute.resource_tracker [req-35714866-66d7-4acd-9d3e-765f5709aee5 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Instance not resizing, skipping migration.
2019-09-23 12:46:38.263 1 WARNING nova.virt.libvirt.driver [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Error monitoring migration: Unexpected error while running command.
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Traceback (most recent call last):
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7420, in _live_migration
[...]
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] ProcessExecutionError: Unexpected error while running command.
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Command: multipath -f /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Exit code: 1
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stdout: u'Sep 23 12:45:58 | /etc/multipath.conf does not exist, blacklisting all devices.\nSep 23 12:45:58 | A default multipath.conf file is located at\nSep 23 12:45:58 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf\nSep 23 12:45:58 | You can run /sbin/mpathconf --enable to create\nSep 23 12:45:58 | /etc/multipath.conf. See man mpathconf(8) for more details\nSep 23 12:45:58 | 360060e8012a705005040a70500000366p1: map in use\nSep 23 12:45:58 | failed to remove multipath map /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366\n'
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stderr: u''
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Live migration failed.: ProcessExecutionError: Unexpected error while running command.
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Traceback (most recent call last):
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6192, in _do_live_migration
[...]
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] ProcessExecutionError: Unexpected error while running command.
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Command: multipath -f /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Exit code: 1
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stdout: u'Sep 23 12:45:58 | /etc/multipath.conf does not exist, blacklisting all devices.\nSep 23 12:45:58 | A default multipath.conf file is located at\nSep 23 12:45:58 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf\nSep 23 12:45:58 | You can run /sbin/mpathconf --enable to create\nSep 23 12:45:58 | /etc/multipath.conf. See man mpathconf(8) for more details\nSep 23 12:45:58 | 360060e8012a705005040a70500000366p1: map in use\nSep 23 12:45:58 | failed to remove multipath map /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366\n'
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stderr: u''
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]
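Based on the bug summary and the Fixed In Version above, the mitigation is to stop treating source-side disconnect_volume failures as fatal once the instance is already running on the destination. A minimal, self-contained sketch of that pattern (illustrative names only, not nova's actual code):

import logging

LOG = logging.getLogger(__name__)

def cleanup_source_volumes(volumes, disconnect_volume):
    """Disconnect each volume on the source; log failures instead of
    failing the whole migration, which has already succeeded."""
    for vol in volumes:
        try:
            disconnect_volume(vol)
        except Exception:
            # e.g. ProcessExecutionError from 'multipath -f'; the map
            # may be busy or already gone, but the instance is safe on
            # the destination, so record the error and continue.
            LOG.exception('Ignoring error disconnecting volume %s', vol)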
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283