Bug 1486329 - Failed to install overcloud - image rhosp12/openstack-cron-docker not found
Summary: Failed to install overcloud - image rhosp12/openstack-cron-docker not found
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-containers
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: beta
Target Release: 12.0 (Pike)
Assignee: Dan Prince
QA Contact: Omri Hochman
Docs Contact: Andrew Burden
Reported: 2017-08-29 13:40 UTC by Yurii Prokulevych
Modified: 2023-02-22 23:02 UTC (History)
CC: 14 users

Fixed In Version: openstack-cron-docker-12.0-20170830.1
Last Closed: 2017-12-13 19:15:14 UTC




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:3457 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 12.0 Containers Enhancement Advisory 2017-12-14 04:45:51 UTC

Description Yurii Prokulevych 2017-08-29 13:40:57 UTC
Description of problem:
-----------------------
Attempt to install the 'latest' build failed:

openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e /home/stack/virt/internal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/enable-tls.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/public_vip.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml \
-e /home/stack/virt/hostnames.yml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml \
-e /home/stack/virt/docker-images.yam
...
2017-08-29 13:08:10Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0]: CREATE_IN_PROGRESS  state changed
2017-08-29 13:08:10Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step1]: CREATE_COMPLETE  state changed
2017-08-29 13:08:10Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step1]: CREATE_COMPLETE  state changed
2017-08-29 13:08:10Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.2]: CREATE_IN_PROGRESS  state changed
2017-08-29 13:08:37Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0]: SIGNAL_IN_PROGRESS  Signal: deployment d5887ccd-964b-4373-93ad-6a07b7907dce failed (2)
2017-08-29 13:08:38Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0]: CREATE_FAILED  Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2
2017-08-29 13:08:38Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.1]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:38Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.2]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:38Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_FAILED  Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2
2017-08-29 13:08:38Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_FAILED  Error: resources.CephStorageDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2
2017-08-29 13:08:39Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:39Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:39Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED  Resource CREATE failed: Error: resources.CephStorageDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2
2017-08-29 13:08:39Z [1]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:39Z [0]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:39Z [2]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:39Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED  Resource CREATE failed: Operation cancelled
2017-08-29 13:08:39Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED  Error: resources.AllNodesDeploySteps.resources.CephStorageDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2
2017-08-29 13:08:39Z [overcloud]: CREATE_FAILED  Resource CREATE failed: Error: resources.AllNodesDeploySteps.resources.CephStorageDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2
2017-08-29 13:08:40Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1.1]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:40Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1.0]: CREATE_FAILED  CREATE aborted
2017-08-29 13:08:40Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_FAILED  Resource CREATE failed: Operation cancelled

 Stack overcloud CREATE_FAILED 
overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.1:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: 2a5ce57f-c338-410a-849b-1c2eaf36a3c7
  status: CREATE_FAILED
  status_reason: |
    CREATE aborted
  deploy_stdout: |
    None
  deploy_stderr: |
    None
overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: d5887ccd-964b-4373-93ad-6a07b7907dce
  status: CREATE_FAILED
  status_reason: |
    Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2
  deploy_stdout: |
    ...
        ], 
        "changed": false, 
        "failed": true, 
        "failed_when_result": true
    }
        to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/210676fa-dddc-4ef5-ac42-d11c45f7edab_playbook.retry
    
    PLAY RECAP *********************************************************************
    localhost                  : ok=5    changed=1    unreachable=0    failed=1   
    
    (truncated, view all with --long)
  deploy_stderr: |

overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.2:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: e251d728-1a9d-4524-9c65-60731f9f2b10
  status: CREATE_FAILED
  status_reason: |
    CREATE aborted
  deploy_stdout: |

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
openstack-tripleo-heat-templates-7.0.0-0.20170821194253.el7ost.noarch
container-storage-setup-0.3.0-3.git927974f.el7.noarch
openstack-tripleo-common-containers-7.4.1-0.20170818153039.7d74e83.el7ost.noarch
skopeo-containers-0.1.20-2.1.gite802625.el7.x86_64
container-selinux-2.21-1.el7.noarch
openstack-swift-container-2.15.2-0.20170821181730.c54c6b3.el7ost.noarch
subscription-manager-plugin-container-1.19.21-1.el7.x86_64

Steps to Reproduce:
1. Install the overcloud with the command from the 'Description of problem' section


Additional info:
----------------
The culprit seems to be the missing openstack-cron-docker image.

Comment 3 Dan Prince 2017-08-29 16:01:02 UTC
As a workaround, I think you could set the DockerCrondConfigImage heat parameter to any image, really. It just runs cron, which is in all base images. The *actual* logrotate cron job will fail, but that will occur much later, allowing you to proceed with other CI tests.

The real fix is to get the new cron image backported downstream.

So how about this:

DockerCrondConfigImage: openstack-keystone-docker:2017-08-18.2
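
The override above would go in a custom environment file passed to the deploy command with -e. A minimal sketch, assuming a hypothetical file name and the local-registry prefix used elsewhere in this report (adjust both to your setup):

```yaml
# cron-workaround.yaml (hypothetical name) -- applies comment 3's suggestion.
# The image tag is the one Dan proposed; the 192.168.24.1:8787/rhosp12/
# registry prefix is an assumption based on the image references in comment 5.
parameter_defaults:
  DockerCrondConfigImage: 192.168.24.1:8787/rhosp12/openstack-keystone-docker:2017-08-18.2
```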

Comment 4 Dan Yasny 2017-08-29 18:54:57 UTC
(In reply to Dan Prince from comment #3)
> As a work around I think you could set the DockerCrondConfigImage heat image
> to anything really. It just runs cron which is in all base images. The
> *actual* logrotate cron job will fail but that will occur much later
> allowing you to proceed with other CI test.
> 
> The real fix is to get the new cron image backported downstream.
> 
> So how about this:
> 
> DockerCrondConfigImage: openstack-keystone-docker:2017-08-18.2

Just tried it and deploy failed.

Comment 5 Dan Yasny 2017-08-29 19:21:40 UTC
With the workaround enabled: 

Error running ['docker', 'run', '--name', 'mysql_bootstrap', '--label', 'config_id=tripleo_step1', '--label', 'container_name=mysql_bootstrap', '--label', 'managed_by=paunch', '--label', 'config_data={\\"environment\\": [\\"KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\\", \\"KOLLA_BOOTSTRAP=True\\", \\"KOLLA_KUBERNETES=True\\", \\"DB_MAX_TIMEOUT=60\\", \\"DB_CLUSTERCHECK_PASSWORD=tbMPbVBqWYJCEdp4wkZtBus34\\", \\"DB_ROOT_PASSWORD=2x1DFnpXvM\\"], \\"start_order\\": 1, \\"command\\": [\\"bash\\", \\"-ec\\", \\"if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\\\\nkolla_start\\\\nmysqld_safe --skip-networking --wsrep-on=OFF &\\\\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \\'until mysqladmin -uroot -p\\\\\\"${DB_ROOT_PASSWORD}\\\\\\" ping 2>/dev/null; do sleep 1; done\\'\\\\nmysql -uroot -p\\\\\\"${DB_ROOT_PASSWORD}\\\\\\" -e \\\\\\"CREATE USER \\'clustercheck\\'@\\'localhost\\' IDENTIFIED BY \\'${DB_CLUSTERCHECK_PASSWORD}\\';\\\\\\"\\\\nmysql -uroot -p\\\\\\"${DB_ROOT_PASSWORD}\\\\\\" -e \\\\\\"GRANT PROCESS ON container_images.yaml debug.yaml deploy_info.sh instackenv.json oc_ironic.yaml openstack_failures_long.log overcloud_deployment_53.log overcloud_deploy.sh overcloud_install.log undercloud.conf undercloud_deploy.sh undercloud_install.log undercloud-passwords.conf TO \\'clustercheck\\'@\\'localhost\\' WITH GRANT OPTION;\\\\\\"\\\\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\\\\\\"${DB_ROOT_PASSWORD}\\\\\\" shutdown\\"], \\"volumes\\": [\\"/etc/hosts:/etc/hosts:ro\\", \\"/etc/localtime:/etc/localtime:ro\\", \\"/etc/puppet:/etc/puppet:ro\\", \\"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\\", \\"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\\", \\"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\\", \\"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\\", \\"/dev/log:/dev/log\\", \\"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\\", 
\\"/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json\\", \\"/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro\\", \\"/var/lib/mysql:/var/lib/mysql\\"], \\"image\\": \\"192.168.24.1:8787/rhosp12/openstack-mariadb-docker:2017-08-28.10\\", \\"detach\\": false, \\"net\\": \\"host\\"}', '--env=KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', '--env=KOLLA_BOOTSTRAP=True', '--env=KOLLA_KUBERNETES=True', '--env=DB_MAX_TIMEOUT=60', '--env=DB_CLUSTERCHECK_PASSWORD=tbMPbVBqWYJCEdp4wkZtBus34', '--env=DB_ROOT_PASSWORD=2x1DFnpXvM', '--net=host', '--volume=/etc/hosts:/etc/hosts:ro', '--volume=/etc/localtime:/etc/localtime:ro', '--volume=/etc/puppet:/etc/puppet:ro', '--volume=/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '--volume=/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '--volume=/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '--volume=/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '--volume=/dev/log:/dev/log', '--volume=/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '--volume=/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', '--volume=/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', '--volume=/var/lib/mysql:/var/lib/mysql', '192.168.24.1:8787/rhosp12/openstack-mariadb-docker:2017-08-28.10', 'bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\\nkolla_start\\nmysqld_safe --skip-networking --wsrep-on=OFF &\\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \\'until mysqladmin -uroot -p\\"${DB_ROOT_PASSWORD}\\" ping 2>/dev/null; do sleep 1; done\\'\\nmysql -uroot -p\\"${DB_ROOT_PASSWORD}\\" -e \\"CREATE USER \\'clustercheck\\'@\\'localhost\\' IDENTIFIED BY \\'${DB_CLUSTERCHECK_PASSWORD}\\';\\"\\nmysql -uroot -p\\"${DB_ROOT_PASSWORD}\\" -e \\"GRANT PROCESS ON container_images.yaml debug.yaml deploy_info.sh instackenv.json oc_ironic.yaml openstack_failures_long.log 
overcloud_deployment_53.log overcloud_deploy.sh overcloud_install.log undercloud.conf undercloud_deploy.sh undercloud_install.log undercloud-passwords.conf TO \\'clustercheck\\'@\\'localhost\\' WITH GRANT OPTION;\\"\\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\\"${DB_ROOT_PASSWORD}\\" shutdown']. [124]\", 


Full deployment show output:
http://file.rdu.redhat.com/~dyasny/logs/debug.txt

Comment 6 Alexander Chuzhoy 2017-08-29 19:40:34 UTC
(In reply to Dan Yasny from comment #5)
> With the workaround enabled: 
> 
> [full 'docker run mysql_bootstrap' error output quoted from comment 5 snipped]
> Full deployment show output:
> http://file.rdu.redhat.com/~dyasny/logs/debug.txt

This is reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1486420

Comment 9 Jon Schlueter 2017-08-30 19:55:55 UTC
The container image is built and will be part of OSP 12; a scratch build is included in the latest puddle tags for testing.

Comment 10 Dan Prince 2017-08-31 01:27:56 UTC
Until we get the container downstream, as a workaround you can work past this issue by changing the ContainersLogrotateCrond service mapping in openstack-tripleo-heat-templates' environments/docker.yaml file to this:

OS::TripleO::Services::ContainersLogrotateCrond: OS::Heat::None
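
Rather than editing the shipped docker.yaml in place, the same mapping can be applied from a custom environment file passed last on the deploy command line, so later -e files win. A sketch, assuming a hypothetical file name:

```yaml
# disable-logrotate-crond.yaml (hypothetical name) -- comment 10's workaround.
# Mapping the service to OS::Heat::None disables it entirely, so no cron
# container image needs to exist in the registry.
resource_registry:
  OS::TripleO::Services::ContainersLogrotateCrond: OS::Heat::None
```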

Comment 11 Dan Prince 2017-09-06 12:56:51 UTC
The container exists and can now be pulled from the registry via the following:

docker-registry.engineering.redhat.com/rhosp12/openstack-cron-docker:2017-09-05.9

Comment 12 Omri Hochman 2017-09-06 14:01:19 UTC
The container_images.yaml file includes the right content. Unable to reproduce.

Comment 19 errata-xmlrpc 2017-12-13 19:15:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3457
