Bug 1561548 - FFU: updating stack outputs sporadically times out
Summary: FFU: updating stack outputs sporadically times out
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-heat
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 13.0 (Queens)
Assignee: Zane Bitter
QA Contact: Marius Cornea
URL:
Whiteboard:
Duplicates: 1579504
Depends On:
Blocks:
 
Reported: 2018-03-28 14:17 UTC by Marius Cornea
Modified: 2018-06-27 13:50 UTC (History)
CC List: 17 users

Fixed In Version: openstack-heat-10.0.1-0.20180411125640.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-27 13:49:05 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1763021 0 None None None 2018-04-11 13:05:46 UTC
OpenStack gerrit 565993 0 'None' MERGED Retry resource check if atomic key incremented 2021-01-16 01:40:05 UTC
Red Hat Product Errata RHEA-2018:2086 0 None None None 2018-06-27 13:50:26 UTC

Description Marius Cornea 2018-03-28 14:17:19 UTC
Description of problem:
FFU: updating stack outputs sporadically times out and eventually fails with:

2018-03-28 07:05:27Z [overcloud-Networks-7weodipb67gj]: UPDATE_FAILED  Resource UPDATE failed: resources.ExternalNetwork: Stack UPDATE cancelled

 Stack overcloud UPDATE_FAILED 

overcloud.Networks.ExternalNetwork:
  resource_type: OS::TripleO::Network::External
  physical_resource_id: 30c21942-83af-44e4-9d67-c1b26f13a33a
  status: UPDATE_FAILED
  status_reason: |
    resources.ExternalNetwork: Stack UPDATE cancelled


Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-8.0.0-0.20180304031148.el7ost.noarch
puppet-heat-12.3.1-0.20180221104603.27feed4.el7ost.noarch
python-heat-agent-json-file-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
python-heat-agent-docker-cmd-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
openstack-heat-api-10.0.1-0.20180302152334.c3bd928.el7ost.noarch
openstack-heat-agents-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
openstack-heat-common-10.0.1-0.20180302152334.c3bd928.el7ost.noarch
python-heat-agent-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
python-heat-agent-hiera-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
openstack-tripleo-heat-templates-8.0.0-0.20180304031148.el7ost.noarch
python-heat-agent-ansible-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
openstack-heat-api-cfn-10.0.1-0.20180302152334.c3bd928.el7ost.noarch
heat-cfntools-1.3.0-2.el7ost.noarch
python-heat-agent-apply-config-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
python2-heatclient-1.14.0-0.20180213175737.2ce6aa1.el7ost.noarch
python-heat-agent-puppet-1.5.4-0.20180301153730.ecf43c7.el7ost.noarch
openstack-heat-engine-10.0.1-0.20180302152334.c3bd928.el7ost.noarch


How reproducible:
sporadically

Steps to Reproduce:
1. Deploy OSP10 with 3 controllers + 2 computes + 3 ceph nodes
2. Upgrade undercloud to OSP11/12/13
3. Run overcloud deploy to update the stack outputs:
#!/bin/bash
openstack overcloud deploy \
--timeout 100 \
--templates /usr/share/openstack-tripleo-heat-templates \
--stack overcloud \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
--control-scale 3 \
--control-flavor controller \
--compute-scale 2 \
--compute-flavor compute \
--ceph-storage-scale 3 \
--ceph-storage-flavor ceph \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e /home/stack/virt/internal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/enable-tls.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/public_vip.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml \
-e /home/stack/virt/hostnames.yml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/docker-images.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/fast-forward-upgrade.yaml \
-e /home/stack/ffu_repos.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/config-download-environment.yaml \
-e /home/stack/ceph-ansible-env.yaml \


Actual results:
stack update fails after the 100-minute timeout:

2018-03-28 05:25:43Z [overcloud-Networks-7weodipb67gj.InternalNetwork]: UPDATE_COMPLETE  state changed
2018-03-28 07:05:27Z [Networks]: UPDATE_FAILED  UPDATE aborted (Task update from TemplateResource "Networks" [96db6ebd-335e-4a8a-a2e4-22646c2d4867] Stack "overcloud" [19781d42-3e15-478c-950b-fc9b802bebff] Timed out)
2018-03-28 07:05:27Z [overcloud-Networks-7weodipb67gj]: UPDATE_FAILED  Stack UPDATE cancelled
2018-03-28 07:05:27Z [overcloud]: UPDATE_FAILED  Timed out
2018-03-28 07:05:27Z [overcloud-Networks-7weodipb67gj-ExternalNetwork-gbbdgfnhwlb3]: UPDATE_FAILED  Stack UPDATE cancelled
2018-03-28 07:05:27Z [overcloud-Networks-7weodipb67gj.ExternalNetwork]: UPDATE_FAILED  resources.ExternalNetwork: Stack UPDATE cancelled
2018-03-28 07:05:27Z [overcloud-Networks-7weodipb67gj]: UPDATE_FAILED  Resource UPDATE failed: resources.ExternalNetwork: Stack UPDATE cancelled

 Stack overcloud UPDATE_FAILED 

overcloud.Networks.ExternalNetwork:
  resource_type: OS::TripleO::Network::External
  physical_resource_id: 30c21942-83af-44e4-9d67-c1b26f13a33a
  status: UPDATE_FAILED
  status_reason: |
    resources.ExternalNetwork: Stack UPDATE cancelled


Expected results:
stack update succeeds.

Additional info:
Attaching sosreports.

Comment 2 Marios Andreou 2018-04-02 13:05:07 UTC
Please triage this; we are going through the list and assigning round-robin, thanks. (DFG:Upgrades triage call)

Comment 3 Zane Bitter 2018-04-10 23:32:01 UTC
The proximate cause appears to be this:

2018-03-28 01:25:40.572 2635 INFO heat.engine.resource [req-a5d7f3e3-0a03-4399-8174-d67bdc697d1d - admin - default default] Resource ImmutableSubnet "ExternalSubnet" [184ab5e2-f91f-4347-82d7-e2822774f72b] Stack "overcloud-Networks-7weodipb67gj-ExternalNetwork-gbbdgfnhwlb3" [30c21942-83af-44e4-9d67-c1b26f13a33a] is locked or does not exist
2018-03-28 01:25:40.573 2635 DEBUG heat.engine.resource [req-a5d7f3e3-0a03-4399-8174-d67bdc697d1d - admin - default default] Resource id:67 locked or does not exist. Expected atomic_key:2, accessing from engine_id:d0e4b9a5-cd19-4046-a823-b20fa352f7f4 _store_with_lock /usr/lib/python2.7/site-packages/heat/engine/resource.py:2128

When this happens, Heat assumes that a previous update of the resource is in progress, and takes no further action (on the assumption that the new update will be retriggered when the existing one completes). The TemplateResource that is the parent of the stack containing this missing resource is the one that times out. So it's reasonable to assume the reason is that no engine was processing a previous update (nothing like that is apparent from the logs), and so we just waited forever for a retrigger that was never going to happen.

The error we're seeing occurs when we're trying to store the resource state (which we must do to update the template ID it is associated with) upon concluding that no update is required. (There is no log message to indicate that an actual update of the resource is starting). Since the resource presumably existed in the DB when we started the update call 1s earlier:

2018-03-28 01:25:39.511 2635 DEBUG heat.engine.scheduler [req-a5d7f3e3-0a03-4399-8174-d67bdc697d1d - admin - default default] Task update from ImmutableSubnet "ExternalSubnet" [184ab5e2-f91f-4347-82d7-e2822774f72b] Stack "overcloud-Networks-7weodipb67gj-ExternalNetwork-gbbdgfnhwlb3" [30c21942-83af-44e4-9d67-c1b26f13a33a] starting start /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:177
2018-03-28 01:25:39.512 2635 DEBUG heat.engine.scheduler [req-a5d7f3e3-0a03-4399-8174-d67bdc697d1d - admin - default default] Task update from ImmutableSubnet "ExternalSubnet" [184ab5e2-f91f-4347-82d7-e2822774f72b] Stack "overcloud-Networks-7weodipb67gj-ExternalNetwork-gbbdgfnhwlb3" [30c21942-83af-44e4-9d67-c1b26f13a33a] running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:209

that leaves the likely causes as a mismatch of the expected atomic_key (2) or a mismatch of the expected engine_id (None).
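
For illustration, the guarded write works roughly like the sketch below. This is not Heat's actual code - the real _store_with_lock() goes through sqlalchemy and writes more columns - but it shows why a mismatched atomic_key or engine_id makes the write a no-op and produces the "locked or does not exist" message:

# Minimal sketch of the optimistic-locking write; `conn` is assumed to be a
# DB-API connection to the heat database.
def store_with_lock(conn, rsrc_id, expected_atomic_key, expected_engine_id,
                    new_template_id):
    new_key = 1 if expected_atomic_key is None else expected_atomic_key + 1
    cur = conn.cursor()
    cur.execute(
        "UPDATE resource"
        "   SET current_template_id = %s, atomic_key = %s"
        " WHERE id = %s"
        "   AND atomic_key <=> %s"    # <=> is MySQL's NULL-safe equality
        "   AND engine_id <=> %s",
        (new_template_id, new_key, rsrc_id,
         expected_atomic_key, expected_engine_id))
    conn.commit()
    if cur.rowcount == 0:
        # Something else incremented atomic_key (or holds the lock) since we
        # loaded the row. Heat logs "Resource id:<n> locked or does not
        # exist", raises UpdateInProgress (RuntimeError here as a stand-in),
        # and assumes the other update will retrigger this resource later.
        raise RuntimeError('UpdateInProgress (resource %s)' % rsrc_id)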

The fact that the expected atomic_key is 2 is a little surprising. The default value when a resource is created pre-convergence is None, and it goes from None -> 1 -> 2 -> 3 -> &c. as the resource is updated. AFAICT the process of converting the stack to convergence doesn't change the atomic_key from None. However, the first stack-show with show_outputs=True after converting to convergence writes the attribute values to the DB and increments the atomic_key. So I'd have expected the atomic_key at the time of the failing write to be 1, because the engine logs show that stack-show happening. (In fact the API logs show it happening twice in parallel immediately after the conversion to convergence, and although we can't know whether show_outputs was enabled both times, the engine logs appear to show the atomic_key successfully preventing races between them to update many of the resources - though not the one causing the problem.)

There is a case where the atomic_key could be expected to be 2: if the resource is in a FAILED state, then we end up doing two DB writes in a row, to update the current template and change the status. (My long-proposed patch https://review.openstack.org/486267 happens to eliminate this overhead by combining them.) If it's the second one that's logging the error then we would indeed expect the atomic_key to be 2 at that point. So this requires that:

* The resource is in a FAILED state prior to the update for some reason, despite the fact that stacks are supposed to be in a COMPLETE state before we'll convert them to convergence (unlikely but not outlandish).
* Notwithstanding that, we don't need to replace the resource because needs_replace_failed() returns False (very plausible: http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/openstack/neutron/neutron.py?h=stable%2Fqueens#n145). 

and if we were making this calculation in-memory but the atomic_key in the database remained at 1 for some reason, that would explain our problem (although I have no idea how that could have happened).

The other obvious possibility is that something could have updated the atomic_key in the interval between when we loaded the resource and when we tried to write to the DB. I can't see anything in the log to indicate that though, nor is there any plausible candidate.

If this environment is still around (I'm assuming it's not), the output of "SELECT * FROM resource WHERE id=67;" would be very useful for figuring out exactly how the database disagrees with what heat-engine is expecting. If we can reproduce this, then substitute the ID from the log message "Resource id:67 locked or does not exist".

Comment 4 Zane Bitter 2018-04-10 23:33:41 UTC
That info from the database would be very useful if you can reproduce.

Comment 5 Marius Cornea 2018-04-11 01:35:15 UTC
(In reply to Zane Bitter from comment #4)
> That info from the database would be very useful if you can reproduce.

Thanks for detailed investigation! I don't have the environment available anymore but I'll keep it for debug next time I see this issue. I'll keep the needinfo set on me for now.

Comment 6 Zane Bitter 2018-04-11 13:05:46 UTC
I submitted a bug upstream for tracking purposes, even though we don't know enough to fix it yet.

Comment 7 Zane Bitter 2018-04-11 15:57:06 UTC
I split out the part of the patch mentioned above that combines the two writes into one. I have no idea if that will fix the issue, but it seems possible and the patch is tiny and easy to backport.

Comment 8 Marius Cornea 2018-04-14 14:56:29 UTC
I managed to reproduce this issue. I am pasting below the query result. 

I added the dump of the heat db and sosreport from the undercloud here http://file.brq.redhat.com/~mcornea/bugzilla/1561548/


MariaDB [heat]> SELECT * FROM resource WHERE id=67;
+----+--------------------------------------+--------------------------------------+----------------+---------------------+------------+--------+----------+---------------+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+------------+-----------+----------+----------+-------------+---------------------+---------------------------+--------------------------------------+-------------------+--------------+
| id | uuid                                 | nova_instance                        | name           | created_at          | updated_at | action | status   | status_reason | stack_id                             | rsrc_metadata | properties_data                                                                                                                                                                                                           | engine_id | atomic_key | needed_by | requires | replaces | replaced_by | current_template_id | properties_data_encrypted | root_stack_id                        | rsrc_prop_data_id | attr_data_id |
+----+--------------------------------------+--------------------------------------+----------------+---------------------+------------+--------+----------+---------------+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+------------+-----------+----------+----------+-------------+---------------------+---------------------------+--------------------------------------+-------------------+--------------+
| 67 | 149cba55-8a58-4896-bd9c-f05ff58f12a7 | 6d35d9ca-2a05-4ef2-bfa3-02364f1732f7 | ExternalSubnet | 2018-04-14 02:18:04 | NULL       | CREATE | COMPLETE | state changed | ff5352db-e92c-455d-b05c-c35ea1188f88 | {}            | {"name": "external_subnet", "enable_dhcp": false, "allocation_pools": [{"start": "10.0.0.101", "end": "10.0.0.149"}], "gateway_ip": "10.0.0.1", "cidr": "10.0.0.0/24", "network": "cac67449-949c-4ccb-812f-5fb8446f61f9"} | NULL      |          3 | []        | [68]     |     NULL |        NULL |                 441 |                         0 | 88d24cd3-4d0f-4ad7-ab50-b15a545859a8 |              NULL |          673 |
+----+--------------------------------------+--------------------------------------+----------------+---------------------+------------+--------+----------+---------------+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+------------+-----------+----------+----------+-------------+---------------------+---------------------------+--------------------------------------+-------------------+--------------+
1 row in set (0.00 sec)

Comment 9 Zane Bitter 2018-04-18 00:58:49 UTC
Unfortunately the log file is truncated in the reproducer's sosreport, so we can't see exactly what happened at the time of the failure. We can still learn some interesting things from the database, though:

MariaDB [bz1561548]> select id,name,action,status,engine_id,atomic_key,rsrc_prop_data_id,current_template_id,attr_data_id,requires from resource where stack_id='ff5352db-e92c-455d-b05c-c35ea1188f88';
+----+-----------------+--------+----------+-----------+------------+-------------------+---------------------+--------------+----------+
| id | name            | action | status   | engine_id | atomic_key | rsrc_prop_data_id | current_template_id | attr_data_id | requires |
+----+-----------------+--------+----------+-----------+------------+-------------------+---------------------+--------------+----------+
| 67 | ExternalSubnet  | CREATE | COMPLETE | NULL      |          3 |              NULL |                 441 |          673 | [68]     |
| 68 | ExternalNetwork | CREATE | COMPLETE | NULL      |          3 |               668 |                1328 |         NULL | []       |
+----+-----------------+--------+----------+-----------+------------+-------------------+---------------------+--------------+----------+

It appears that ExternalNetwork has been updated - it has a rsrc_prop_data_id instead of properties_data inline, which means it's been updated since convergence was enabled on the stack (i.e. after upgrading to OSP13), and it has a current_template_id that matches the raw_template_id of the stack.

MariaDB [bz1561548]> SELECT raw_template_id,prev_raw_template_id FROM stack WHERE id='ff5352db-e92c-455d-b05c-c35ea1188f88';
+-----------------+----------------------+
| raw_template_id | prev_raw_template_id |
+-----------------+----------------------+
|            1328 |                  441 |
+-----------------+----------------------+

And it appears that ExternalSubnet has *not* been updated. It doesn't have a rsrc_prop_data_id, and its current_template_id matches the prev_raw_template_id of the stack.

From this point we can only conjecture, since we don't have the logs to show exactly what went wrong. But *ASSUMING* that they would be identical to the previous case, a number of things stand out.

* It's the exact same resource failing. What is it about that resource???

* The update to change the current_template_id is the first one, and in any event the resource is not in a FAILED state. So that rules out the double-write being part of the cause, which I hypothesised in comment #3. It also rules out https://review.openstack.org/560427 being a solution.

* No engine_id is set, so as we expected the resource is not actually locked by anything.

* The atomic_key of ExternalNetwork, which has been updated, is 3 (I'm not sure where those writes all come from, but whatever). The atomic_key of ExternalSubnet, which has not been updated, is also 3 even though we'd expect it to have been written one less time. The previous time we saw this error, heat-engine was expecting an atomic_key of 2, which corresponds to our expectations when comparing it to ExternalNetwork. It sure looks like something has incremented the atomic_key on ExternalSubnet between the time when it was loaded and the time we went to update the current_template_id.

* But wait, ExternalSubnet has an attr_data_id, indicating that its attribute values that are referenced have been stored to the DB, whereas ExternalNetwork does not (since none of its attributes are referenced). That would explain the higher atomic_key - it gets incremented in the DB when we write the attr_data_id. And there's even an issue where the expected atomic_key in memory is *not* incremented to match (fix here: https://review.openstack.org/560417), so a subsequent write from the same Resource object would fail.

* That would explain everything, except that there's no way I can find that the attr_data_id gets written in the same session as the check_resource that we're doing when it failed - in fact it should have been written the first time we did show-stack, even before the update started. So we shouldn't be expecting an atomic_key of 2.

* The entire (nested) stack comprises exactly two resources, and they have a dependency relationship so they're checked serially. Both are Neutron resources so there's no signals or anything to worry about. Nothing in the parent stack should be doing anything until the update of this whole nested stack has completed. It's really difficult to imagine what could be racing with our update. A simultaneous stack-show (with output data requested) is one theoretical possibility, but it's quite incredible that it would affect the exact same resource.

A possible solution would be to check, when we get an UpdateInProgress exception in CheckResource._do_check_resource(), whether the resource is actually locked by another engine, and to retrigger the same check if it is not.
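
Purely as a sketch of that idea (the helper names below are made up; this is not the patch that eventually merged upstream as "Retry resource check if atomic key incremented"):

class UpdateInProgress(Exception):
    """Stand-in for heat.common.exception.UpdateInProgress."""

def do_check_resource(cnxt, rsrc_id, template_id, check_fn, load_fn,
                      max_retries=3):
    # check_fn performs the normal resource check and raises UpdateInProgress
    # when the guarded DB write fails; load_fn re-reads the resource row.
    for _ in range(max_retries):
        try:
            return check_fn(cnxt, rsrc_id, template_id)
        except UpdateInProgress:
            row = load_fn(cnxt, rsrc_id)
            if row.engine_id is not None:
                # Genuinely locked by another engine; its completion will
                # retrigger this check, so keep the existing behaviour.
                raise
            # Not locked: our in-memory atomic_key was just stale (e.g. a
            # parallel stack-show cached the attributes). Reload and retry
            # instead of waiting forever for a retrigger that never comes.
            continue
    raise UpdateInProgress()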

I'm setting needinfo again, because we need to see both the full log and the DB dump from the same failure to be 100% sure that things really are as strange as we think they are.

Comment 10 Marius Cornea 2018-04-18 19:02:04 UTC
OK, getting back with the needinfo from today's failure:

Log files and db dump uploaded to: http://file.brq.redhat.com/~mcornea/bugzilla//1561548/

 [root@undercloud-0 stack]# grep 'locked or does not exist' /var/log/heat/heat-engine.log
2018-04-18 12:30:09.182 20983 INFO heat.engine.resource [req-d0382c9a-c780-4069-9806-441d172c0c99 - admin - default default] Resource ImmutableSubnet "ExternalSubnet" [0098d3bb-8f87-4819-90c1-6de0085186e3] Stack "QualtiyEng-Networks-535ejryp5buz-ExternalNetwork-iazacsk5pfeq" [6b448fbf-28f2-4be9-9262-f1fcf7803ef3] is locked or does not exist
2018-04-18 12:30:09.182 20983 DEBUG heat.engine.resource [req-d0382c9a-c780-4069-9806-441d172c0c99 - admin - default default] Resource id:75 locked or does not exist. Expected atomic_key:2, accessing from engine_id:2cbc22a6-b18d-4419-be97-fdb41c6f79fa _store_with_lock /usr/lib/python2.7/site-packages/heat/engine/resource.py:2135
 [root@undercloud-0 stack]# mysql -e 'use heat; SELECT * FROM resource WHERE id=75;'
+----+--------------------------------------+--------------------------------------+----------------+---------------------+------------+--------+----------+---------------+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+------------+-----------+----------+----------+-------------+---------------------+---------------------------+--------------------------------------+-------------------+--------------+
| id | uuid                                 | nova_instance                        | name           | created_at          | updated_at | action | status   | status_reason | stack_id                             | rsrc_metadata | properties_data                                                                                                                                                                                                           | engine_id | atomic_key | needed_by | requires | replaces | replaced_by | current_template_id | properties_data_encrypted | root_stack_id                        | rsrc_prop_data_id | attr_data_id |
+----+--------------------------------------+--------------------------------------+----------------+---------------------+------------+--------+----------+---------------+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+------------+-----------+----------+----------+-------------+---------------------+---------------------------+--------------------------------------+-------------------+--------------+
| 75 | db6a070d-1d29-4fa2-9c51-cf61da67ed27 | 0098d3bb-8f87-4819-90c1-6de0085186e3 | ExternalSubnet | 2018-04-18 14:54:49 | NULL       | CREATE | COMPLETE | state changed | 6b448fbf-28f2-4be9-9262-f1fcf7803ef3 | {}            | {"name": "external_subnet", "enable_dhcp": false, "allocation_pools": [{"start": "10.0.0.101", "end": "10.0.0.149"}], "gateway_ip": "10.0.0.1", "cidr": "10.0.0.0/24", "network": "c75bffa3-f6ba-4378-bafc-c0579a68cab5"} | NULL      |          3 | []        | [76]     |     NULL |        NULL |                   9 |                         0 | 5a0edecb-b2db-4a13-b9e2-c3a69059df41 |              NULL |          416 |
+----+--------------------------------------+--------------------------------------+----------------+---------------------+------------+--------+----------+---------------+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+------------+-----------+----------+----------+-------------+---------------------+---------------------------+--------------------------------------+-------------------+--------------+
 [root@undercloud-0 stack]#

Comment 11 Zane Bitter 2018-04-19 18:55:33 UTC
Thanks so much Marius. That confirms what I suspected:

* It's the exact same resource again
* atomic_key is 3 when heat-engine is expecting 2

I still have no idea of the cause, but hopefully I can find a safe way to retry.

Comment 14 Zane Bitter 2018-04-26 00:52:14 UTC
I just noticed that in the process of converting the stack to convergence:

https://github.com/zaneb/heat-convergence-prototype/commit/c74aac1f07e3fdf1fe382a7edce6c4828eda13e3

we do two writes that each increment the atomic_key (set_needed_by() and set_requires()). So the expected atomic_key after converting to convergence would be 2, and after subsequently showing the stack and caching the attributes it would be 3. This is exactly what we're seeing: the resource with no attributes cached reaches atomic_key 3 only after an update has traversed it, while the resource with cached attributes has atomic_key 3 prior to the update traversing it.
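
Spelling the bookkeeping out (a toy illustration of the counting above, not Heat code):

def bump(key):
    # Each guarded write increments atomic_key; it starts out as None.
    return 1 if key is None else key + 1

key = None        # resource created pre-convergence
key = bump(key)   # set_needed_by() during conversion to convergence -> 1
key = bump(key)   # set_requires() during conversion to convergence  -> 2

# ExternalNetwork: no attributes are referenced, so it stays at 2 until the
# update traverses it and writes the new current_template_id -> 3.
# ExternalSubnet: a stack-show with outputs caches its attribute values,
# which is one more guarded write, so it reaches 3 *before* the update:
key = bump(key)   # attr_data written by show-stack -> 3
assert key == 3

# The failing write expected atomic_key == 2, i.e. it loaded ExternalSubnet
# before the attribute cache was written but tried to store it afterwards.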

The mystery remains why we're expecting the atomic_key to be 2 for the latter resource, as if we loaded it before caching the attributes and tried to write to it after. With all of the atomic writes accounted for, this must be what is happening. Perhaps it's triggered by a show-stack (with outputs) issued by an external command in parallel with the stack update?

In any event, I think that https://review.openstack.org/564348 should prevent the update from hanging.

Comment 15 Zane Bitter 2018-05-07 14:11:51 UTC
Patch merged in upstream master and proposed to stable/queens.

Comment 22 Marius Cornea 2018-05-17 22:00:16 UTC
*** Bug 1579504 has been marked as a duplicate of this bug. ***

Comment 25 Marius Cornea 2018-05-25 17:26:27 UTC
I wasn't able to reproduce this issue with my latest upgrade attempts, so I'll consider it verified. If it reproduces I'll reopen. Thanks for the support.

Comment 27 errata-xmlrpc 2018-06-27 13:49:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086

