Undercloud deployment fails because containers are not able to start; two warnings are reported:

- missing /etc/localtime
- missing /usr/libexec/iptables/iptables.init

Compose used was RHOS_TRUNK-15.0-RHEL-8-20190311.n.1, container images with tag 20190311.1.

> [root@undercloud-0 stack]# grep ERROR undercloud-ansible-i82ml3sp/ansible.log
> "2019-03-12 22:01:29,627 ERROR: 32858 -- Failed running container for crond",
> "2019-03-12 22:01:29,695 ERROR: 32862 -- Failed running container for heat_api_cfn",
> "2019-03-12 22:01:29,812 ERROR: 32860 -- Failed running container for haproxy",
> "2019-03-12 22:01:29,898 ERROR: 32863 -- Failed running container for heat",
> "2019-03-12 22:01:30,103 ERROR: 32861 -- Failed running container for heat_api",
> "2019-03-12 22:01:30,160 ERROR: 32859 -- Failed running container for glance_api",
> "2019-03-12 22:01:51,031 ERROR: 32861 -- Failed running container for iscsid",
> "2019-03-12 22:01:51,474 ERROR: 32859 -- Failed running container for keepalived",
> "2019-03-12 22:01:51,612 ERROR: 32858 -- Failed running container for ironic_api",
> "2019-03-12 22:01:51,748 ERROR: 32862 -- Failed running container for ironic",
> "2019-03-12 22:01:51,961 ERROR: 32860 -- Failed running container for ironic_inspector",
> "2019-03-12 22:01:52,016 ERROR: 32863 -- Failed running container for neutron",
> "2019-03-12 22:02:11,785 ERROR: 32861 -- Failed running container for keystone",
> "2019-03-12 22:02:26,505 ERROR: 32859 -- Failed running container for memcached",
> "2019-03-12 22:02:34,673 ERROR: 32862 -- Failed running container for mysql",
> "2019-03-12 22:02:34,831 ERROR: 32858 -- Failed running container for mistral",
> "2019-03-12 22:02:34,935 ERROR: 32863 -- Failed running container for nova_metadata",
> "2019-03-12 22:02:35,102 ERROR: 32860 -- Failed running container for nova",
> "2019-03-12 22:02:38,010 ERROR: 32861 -- Failed running container for nova_placement",
> "2019-03-12 22:02:42,150 ERROR: 32859 -- Failed running container for rabbitmq",
> "2019-03-12 22:02:54,861 ERROR: 32863 -- Failed running container for tripleo-ui",
> "2019-03-12 22:02:55,010 ERROR: 32860 -- Failed running container for zaqar",
> "2019-03-12 22:02:55,209 ERROR: 32858 -- Failed running container for swift_ringbuilder",
> "2019-03-12 22:02:55,268 ERROR: 32862 -- Failed running container for swift",

I've picked glance as an example for the rest of the info; it seems to be the same with the others.

From /var/log/tripleo-container-image-prepare.log, the obtained images were:

> DockerGlanceApiConfigImage: 192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1
> DockerGlanceApiImage: 192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1
> DockerGlanceApiConfigImage: 192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1
> DockerGlanceApiImage: 192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1

From ansible.log:

> [root@undercloud-0 stack]# grep glance undercloud-ansible-i82ml3sp/ansible.log
> 2019-03-12 17:54:19,585 p=26645 u=root | changed: [undercloud-0] => (item={'path': '/var/log/containers/glance', 'setype': 'svirt_sandbox_file_t'})
> 2019-03-12 17:54:19,736 p=26645 u=root | changed: [undercloud-0] => (item={'path': '/var/log/glance', 'setype': 'svirt_sandbox_file_t'})
> 2019-03-12 17:54:19,767 p=26645 u=root | TASK [glance logs readme] ******************************************************
> 2019-03-12 17:54:20,481 p=26645 u=root | TASK [ensure /var/lib/glance exists] *******************************************
> 2019-03-12 18:00:02,921 p=26645 u=root | changed: [undercloud-0] => (item=/var/lib/kolla/config_files/glance_api.json)
> 2019-03-12 18:00:03,252 p=26645 u=root | changed: [undercloud-0] => (item=/var/lib/kolla/config_files/glance_api_tls_proxy.json)
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created",
> "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules\\n (file & line not available)",
> "Warning: Firewall[112 glance_api ipv4](provider=iptables): Unable to persist firewall rules: Execution of '/usr/libexec/iptables/iptables.init save' returned 1: Error: Could not execute posix command: No such file or directory - /usr/libexec/iptables/iptables.init",
> "Warning: Firewall[112 glance_api ipv6](provider=ip6tables): Unable to persist firewall rules: Execution of '/usr/libexec/iptables/ip6tables.init save' returned 1: Error: Could not execute posix command: No such file or directory - /usr/libexec/iptables/ip6tables.init",
> "2019-03-12 22:00:54,677 INFO: 32859 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1",
> "2019-03-12 22:00:55,173 INFO: 32859 -- Removing container: container-puppet-glance_api",
> "2019-03-12 22:00:56,176 INFO: 32859 -- Pulling image: 192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1",
> "2019-03-12 22:01:23,833 WARNING: 32859 -- ['/usr/bin/podman', 'run', '--user', 'root', '--name', 'container-puppet-glance_api', '--env', 'PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', '--env', 'NAME=glance_api', '--env', 'HOSTNAME=undercloud-0', '--env', 'NO_ARCHIVE=', '--env', 'STEP=6', '--env', 'NET_HOST=true', '--log-driver', 'json-file', '--volume', '/etc/localtime:/etc/localtime:ro', '--volume', '/tmp/tmp6gj6jhyk:/etc/config.pp:ro', '--volume', '/etc/puppet/:/tmp/puppet-etc/:ro', '--volume', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '--volume', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '--volume', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '--volume', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '--volume', '/var/lib/config-data:/var/lib/config-data/:rw', '--volume', '/dev/log:/dev/log:rw', '--log-opt', 'path=/var/log/containers/stdouts/container-puppet-glance_api.log', '--security-opt', 'label=disable', '--volume', '/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', '--entrypoint', '/var/lib/container-puppet/container-puppet.sh', '--net', 'host', '--volume', '/etc/hosts:/etc/hosts:ro', '--volume', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '192.168.24.1:8787/rhosp15/openstack-glance-api:20190311.1'] run failed after error checking path \"/etc/localtime\": stat /etc/localtime: no such file or directory",
> "2019-03-12 22:01:23,834 WARNING: 32859 -- Retrying running container: glance_api",
> "2019-03-12 22:01:27,001 WARNING: 32859 -- ['/usr/bin/podman', 'start', '-a', 'container-puppet-glance_api'] run failed after unable to find container container-puppet-glance_api: no container with name or ID container-puppet-glance_api found: no such container",
> "2019-03-12 22:01:27,002 WARNING: 32859 -- Retrying running container: glance_api",
> "2019-03-12 22:01:30,160 WARNING: 32859 -- ['/usr/bin/podman', 'start', '-a', 'container-puppet-glance_api'] run failed after unable to find container container-puppet-glance_api: no container with name or ID container-puppet-glance_api found: no such container",
> "2019-03-12 22:01:30,160 WARNING: 32859 -- Retrying running container: glance_api",
> "2019-03-12 22:01:30,160 ERROR: 32859 -- Failed running container for glance_api",
> "2019-03-12 22:01:30,160 INFO: 32859 -- Finished processing puppet configs for glance_api",
> "2019-03-12 22:02:55,269 ERROR: 32857 -- ERROR configuring glance_api",

It seems that "./etc/localtime" is present in image layer 6dc95b5438fb:

> # tar tvf 6dc95b5438fb733032e890ccb01c7be14016df7b2e86f15aa56bc891c692a023.tar | grep localtime
> lrwxrwxrwx root/root 0 2019-02-28 21:50 ./etc/localtime -> ../usr/share/zoneinfo/Etc/UTC

After removing the '--volume /etc/localtime:/etc/localtime:ro' parameter it does not have this issue anymore; maybe some issue with access or --volume behaviour? (It does then fail on config.pp, which is expected, as the file is no longer in /tmp when running manually.) (There is no SELinux denial related to this, only one about logrotate not being allowed in containers.)
The issue isn't related to podman or containers at all, but purely to localtime:

> [root@undercloud ~]# ls -al /etc/localtime
> lrwxrwxrwx. 1 root root 23 Mar 13 00:35 /etc/localtime -> /usr/share/zoneinfo/EDT
> [root@undercloud ~]# ls /usr/share/zoneinfo/EDT
> ls: cannot access '/usr/share/zoneinfo/EDT': No such file or directory
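This matches the podman error ("error checking path \"/etc/localtime\": stat /etc/localtime: no such file or directory"): stat follows symlinks, so a dangling link fails the check even though the link itself is present. A minimal reproduction of that failure mode on throwaway paths (not the real /etc/localtime):

```shell
# Build a dangling symlink in a temp dir: the target is never created.
tmp=$(mktemp -d)
ln -s "$tmp/zoneinfo/EDT" "$tmp/localtime"

ls -l "$tmp/localtime" >/dev/null      # lstat: the link itself is listable
if ! stat "$tmp/localtime" >/dev/null 2>&1; then
    # stat follows the link, hits the missing target, and fails --
    # the same "no such file or directory" podman reports for the mount source
    echo "stat failed on dangling symlink"
fi
rm -rf "$tmp"
```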
I think Alex is fixing it with https://review.openstack.org/#/c/642589
Just to note: manually fixing the symlink on the UC node first (e.g. `ln -snf /usr/share/zoneinfo/UTC /etc/localtime`) works as a workaround and makes the undercloud deployment pass for me.
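For reference, the flag combination matters: `-n` stops `ln` from dereferencing the existing (broken) link and `-f` replaces it in place. A throwaway demonstration on illustrative paths, not the real /etc:

```shell
# Recreate the broken state in a temp dir, then apply the workaround.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/share/zoneinfo"
: > "$tmp/usr/share/zoneinfo/UTC"                      # stand-in zone file
ln -s "$tmp/usr/share/zoneinfo/EDT" "$tmp/localtime"   # dangling, as on the UC node

# -s symlink, -n do not follow the existing link, -f replace it
ln -snf "$tmp/usr/share/zoneinfo/UTC" "$tmp/localtime"

[ -e "$tmp/localtime" ] && echo "localtime resolves again"
rm -rf "$tmp"
```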
I'll have to check if the switch to ansible addresses this issue.
I believe this is related to a broken RHEL8 image being used. We don't configure EDT, so if that is wrong prior to deployment of the undercloud, it's coming from the base image used. I just checked a newer RHEL8 guest image, and /etc/localtime is correctly set to ../usr/share/zoneinfo/America/New_York:

> [cloud-user@undercloud ~]$ ls -al /etc/localtime
> lrwxrwxrwx. 1 root root 38 Mar 13 14:31 /etc/localtime -> ../usr/share/zoneinfo/America/New_York
I finally hit this. Will dig into it deeper.
> "Notice: /Stage[main]/Timezone/File[/etc/localtime]/target: target changed '../usr/share/zoneinfo/America/New_York' to '/usr/share/zoneinfo/EDT'",

This is likely a bug in puppet-timezone, which would be addressed by the referenced patch to switch to ansible.
*** Bug 1689396 has been marked as a duplicate of this bug. ***
The timezone is fixed and the deployment is successful. Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:2811