Red Hat Bugzilla – Attachment 1476863 Details for Bug 1618983: openstack deployment failed - Failed running docker-puppet.py for haproxy
Description: failures list logs
Filename: openstack_failures_long.log
MIME Type: text/plain
Creator: Ronnie Rasouli
Created: 2018-08-19 08:30:58 UTC
Size: 1.81 MB
>overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.1:
> resource_type: OS::Heat::StructuredDeployment
> physical_resource_id: f8070133-0ac0-4304-8f65-b09b355e243e
> status: CREATE_FAILED
> status_reason: |
> Error: resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2
> deploy_stdout: |
>
> PLAY [localhost] ***************************************************************
>
> TASK [Gathering Facts] *********************************************************
> ok: [localhost]
>
> TASK [Create /var/lib/tripleo-config directory] ********************************
> changed: [localhost]
>
> TASK [Check if puppet step_config.pp manifest exists] **************************
> ok: [localhost -> localhost]
>
> TASK [Set fact when file existed] **********************************************
> skipping: [localhost]
>
> TASK [Write the puppet step_config manifest] ***********************************
> changed: [localhost]
>
> TASK [Create /var/lib/docker-puppet] *******************************************
> changed: [localhost]
>
> TASK [Check if docker-puppet puppet_config.yaml configuration file exists] *****
> ok: [localhost -> localhost]
>
> TASK [Set fact when file existed] **********************************************
> skipping: [localhost]
>
> TASK [Write docker-puppet.json file] *******************************************
> changed: [localhost]
>
> TASK [Create /var/lib/docker-config-scripts] ***********************************
> changed: [localhost]
>
> TASK [Clean old /var/lib/docker-container-startup-configs.json file] ***********
> ok: [localhost]
>
> TASK [Check if docker_config_scripts.yaml file exists] *************************
> ok: [localhost -> localhost]
>
> TASK [Set fact when file existed] **********************************************
> skipping: [localhost]
>
> TASK [Write docker config scripts] *********************************************
> changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', u'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'})
> changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', u'mode': u'0700'}, 'key': u'create_swift_secret.sh'})
> changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', u'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'})
> changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', u'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'})
> changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', u'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'})
> changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', u'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'})
>
> TASK [Set docker_config_default fact] ******************************************
> ok: [localhost] => (item=None)
> ok: [localhost] => (item=None)
> ok: [localhost] => (item=None)
> ok: [localhost] => (item=None)
> ok: [localhost] => (item=None)
> ok: [localhost] => (item=None)
> ok: [localhost]
>
> TASK [Check if docker_config.yaml file exists] *********************************
> ok: [localhost -> localhost]
>
> TASK [Set fact when file existed] **********************************************
> skipping: [localhost]
>
> TASK [Set docker_startup_configs_with_default fact] ****************************
> ok: [localhost]
>
> TASK [Write docker-container-startup-configs] **********************************
> changed: [localhost]
>
> TASK [Write per-step docker-container-startup-configs]
************************* > changed: [localhost] => (item={'value': {u'cinder_volume_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_image_tag': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_data_ownership': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], u'user': u'root', u'volumes': [u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'detach': False}, u'redis_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', 
u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_bootstrap': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=wQHWYDMtN2zP34A7ppnf36KgZ', u'DB_ROOT_PASSWORD=nqmpfBXNCf'], u'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'detach': False}, u'haproxy_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', 
u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'rabbitmq_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'rabbitmq_bootstrap': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=2vn7bpVGQM3wmDdKDet3'], u'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], u'net': u'host', u'privileged': False}, u'memcached': {u'start_order': 0, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}}, 'key': u'step_1'}) > changed: [localhost] => (item={'value': {u'nova_placement': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'restart': u'always'}, u'swift_rsync_fix': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" 
/var/lib/kolla/config_files/src/etc/rsyncd.conf'], u'user': u'root', u'volumes': [u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], u'net': u'host', u'detach': False}, u'nova_db_sync': {u'start_order': 3, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'heat_engine_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'swift_copy_rings': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'detach': False, u'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], u'user': u'root', u'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, u'nova_api_ensure_default_cell': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], u'net': 
u'host', u'detach': False}, u'keystone_cron': {u'start_order': 4, u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'panko_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', 
u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'nova_api_db_sync': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'iscsid': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'keystone_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'detach': False, u'privileged': False}, u'ceilometer_init_log': {u'start_order': 0, u'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-08-14.4', u'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'user': u'root'}, u'keystone': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'aodh_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], u'net': 
u'host', u'detach': False, u'privileged': False}, u'cinder_volume_init_logs': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'user': u'root', u'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], u'privileged': False}, u'neutron_ovs_bridge': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], u'net': u'host', u'detach': False, u'privileged': True}, u'cinder_api_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 
'cinder-manage db sync --bump-versions'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'net': u'host', u'detach': False, u'privileged': False}, u'nova_api_map_cell0': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'glance_api_db_sync': {u'image': 
u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], u'net': u'host', u'detach': False, u'privileged': False}, u'neutron_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'keystone_bootstrap': {u'action': u'exec', u'start_order': 3, u'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'XjxMBFahCQcXFECTsWUkKHBKA'], u'user': u'root'}, u'horizon': {u'image': u'192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', 
u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_setup_srv': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'command': [u'chown', u'-R', u'swift:', u'/srv/node'], u'user': u'root', u'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) > changed: [localhost] => (item={'value': {u'gnocchi_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], u'user': u'root', u'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, u'mysql_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], u'net': u'host', u'detach': False}, u'gnocchi_init_lib': {u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], u'user': u'root', u'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, u'cinder_api_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'privileged': False, u'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'user': u'root'}, u'create_dnsmasq_wrapper': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-08-14.4', u'pid': u'host', u'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'detach': False}, u'panko_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], u'user': u'root', u'volumes': 
[u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, u'redis_init_bundle': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'config_volume': u'redis_init_bundle', u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False}, u'cinder_scheduler_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'privileged': False, u'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], u'user': u'root'}, u'glance_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], u'privileged': False, u'volumes': [u'/var/log/containers/glance:/var/log/glance'], u'user': 
u'root'}, u'clustercheck': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'restart': u'always'}, u'haproxy_init_bundle': {u'start_order': 3, u'image': u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False, u'privileged': True}, u'neutron_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], u'privileged': False, u'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], u'user': u'root'}, u'mysql_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'config_volume': u'mysql', u'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'rabbitmq_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], u'net': u'host', u'detach': False}, u'nova_api_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'privileged': False, u'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], u'user': u'root'}, u'haproxy_restart_bundle': {u'start_order': 2, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'config_volume': u'haproxy', u'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'create_keepalived_wrapper': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2018-08-14.4', u'pid': u'host', u'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'detach': False}, u'rabbitmq_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'config_volume': u'rabbitmq', u'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'horizon_fix_perms': {u'image': u'192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], u'user': u'root', u'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, u'aodh_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], u'user': u'root', 
u'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, u'nova_metadata_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'privileged': False, u'volumes': [u'/var/log/containers/nova:/var/log/nova'], u'user': u'root'}, u'redis_restart_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'config_volume': u'redis', u'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'heat_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], u'user': u'root', u'volumes': [u'/var/log/containers/heat:/var/log/heat']}, u'nova_placement_init_log': {u'start_order': 1, u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4', u'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], u'user': u'root'}, u'keystone_init_log': {u'start_order': 1, u'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], u'user': u'root'}}, 'key': u'step_2'}) > changed: [localhost] => (item={'value': {u'cinder_volume_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False}, u'gnocchi_api': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'gnocchi_statsd': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-statsd:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'gnocchi_metricd': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-metricd:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_api_discover_hosts': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], u'net': u'host', u'detach': False}, u'ceilometer_gnocchi_upgrade': {u'start_order': 99, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'detach': False, u'privileged': False}, u'cinder_volume_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', 
u'config_volume': u'cinder', u'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'gnocchi_db_sync': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', 
u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], u'net': u'host', u'detach': False, u'privileged': False}}, 'key': u'step_5'}) > changed: [localhost] => (item={'value': {u'swift_container_updater': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'aodh_evaluator': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-evaluator:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_scheduler': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-scheduler:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_object_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'cinder_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_proxy': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], u'net': u'host', u'restart': u'always'}, u'neutron_dhcp': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', 
u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'heat_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_object_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'neutron_metadata_agent': {u'start_order': 10, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-metadata-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'ceilometer_agent_central': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'keystone_refresh': {u'action': u'exec', u'start_order': 1, u'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], u'user': u'root'}, u'swift_account_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'aodh_notifier': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-notifier:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_consoleauth': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-consoleauth:2018-08-14.4', u'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'glance_api': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], u'net': u'host', u'privileged': False, u'restart': u'always'}, 
u'swift_account_reaper': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'ceilometer_agent_notification': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', 
u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_vnc_proxy': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-novncproxy:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_rsync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'nova_api': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'aodh_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_metadata': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'nova', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'heat_engine': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_container_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'neutron_l3_agent': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', 
u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'cinder_scheduler': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_conductor': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-conductor:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'heat_api_cfn': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'neutron_ovs_agent': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-openvswitch-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'cinder_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_account_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_container_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_updater': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': 
u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_expirer': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'heat_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4', u'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_container_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'panko_api': {u'start_order': 2, 
u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'aodh_listener': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-listener:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', 
u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'neutron_api': {u'start_order': 0, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_account_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'logrotate_crond': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], u'net': u'none', u'privileged': True, u'restart': u'always'}}, 'key': u'step_4'}) > changed: [localhost] => (item={'value': {}, 'key': u'step_6'}) > > TASK [Create /var/lib/kolla/config_files directory] **************************** > changed: [localhost] > > TASK [Check if kolla_config.yaml file exists] ********************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write kolla config json files] ******************************************* > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': 
u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-scheduler ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], 
u'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-panko/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', u'permissions': [{u'owner': u'root:ceilometer', u'path': u'/etc/panko', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': 
u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/neutron_ovs_agent_launcher.sh', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'rabbitmq:rabbitmq', u'path': u'/var/lib/rabbitmq', u'recurse': True}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/var/log/rabbitmq', u'recurse': True}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/etc/pki/tls/certs/rabbitmq.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/etc/pki/tls/private/rabbitmq.key', 
u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/gnocchi-metricd', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': 
u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', u'permissions': [{u'owner': u'swift:swift', u'path': u'/var/cache/swift', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/iscsi/', u'source': u'/var/lib/kolla/config_files/src-iscsid/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) > changed: [localhost] => (item={'value': {u'config_files': 
[{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'panko:panko', u'path': u'/var/log/panko', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-listener', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-server 
/etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'apache:apache', u'path': u'/var/log/horizon/', u'recurse': True}, {u'owner': u'apache:apache', u'path': u'/etc/openstack-dashboard/', u'recurse': True}, {u'owner': u'apache:apache', u'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', u'recurse': False}, {u'owner': u'apache:apache', u'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', u'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/etc/pki/tls/certs/neutron.crt'}, {u'owner': u'neutron:neutron', u'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) > changed: 
[localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', u'permissions': [{u'owner': u'glance:glance', u'path': u'/var/lib/glance', u'recurse': True}, {u'owner': u'glance:glance', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'mysql:mysql', u'path': u'/var/log/mysql', u'recurse': True}, {u'owner': u'mysql:mysql', u'path': u'/etc/pki/tls/certs/mysql.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'mysql:mysql', u'path': u'/etc/pki/tls/private/mysql.key', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', 
u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-api-metadata ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-consoleauth ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', 
u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': 
u'/var/lib/kolla/config_files/cinder_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'optional': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', u'permissions': [{u'owner': u'haproxy:haproxy', u'path': u'/var/lib/haproxy', u'recurse': True}, {u'owner': u'haproxy:haproxy', u'path': u'/etc/pki/tls/certs/haproxy/*', u'optional': True, u'perm': u'0600'}, {u'owner': u'haproxy:haproxy', u'path': u'/etc/pki/tls/private/haproxy/*', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-notifier', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'keystone:keystone', u'path': u'/var/log/keystone', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': 
[{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-conductor ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/iscsi/', u'source': u'/var/lib/kolla/config_files/src-iscsid/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'optional': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'redis:redis', u'path': u'/var/run/redis', u'recurse': True}, {u'owner': u'redis:redis', u'path': u'/var/lib/redis', u'recurse': True}, {u'owner': u'redis:redis', u'path': u'/var/log/redis', u'recurse': True}, {u'owner': u'redis:redis', 
u'path': u'/etc/pki/tls/certs/redis.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'redis:redis', u'path': u'/etc/pki/tls/private/redis.key', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', 
u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-evaluator', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/gnocchi-statsd', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) > > TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ > > TASK [Check if docker_puppet_tasks.yaml file exists] *************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker-puppet-tasks json files] ************************************ > skipping: [localhost] => (item={'value': [{u'puppet_tags': 
u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', u'config_volume': u'keystone_init_tasks', u'step_config': u'include ::tripleo::profile::base::keystone', u'config_image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4'}], 'key': u'step_3'}) > > TASK [Set host puppet debugging fact string] *********************************** > ok: [localhost] > > TASK [Write the config_step hieradata] ***************************************** > changed: [localhost] > > TASK [Run puppet host configuration for step 1] ******************************** > changed: [localhost] > > TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** > ok: [localhost] => { > "failed_when_result": false, > "outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [ > "Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=UTF-8", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > 
"Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: 
Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from 
/etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > 
"Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: Facter: Matching apachectl 'Server version: Apache/2.4.6 (Red Hat Enterprise Linux)", > "Server built: May 28 2018 16:19:32'", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not 
exist", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not 
exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > 
"Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking for data source 0676C062-DA45-4183-B30B-258508663445", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/0676C062-DA45-4183-B30B-258508663445.json, skipping", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/trusted_cas.pp' in environment production", > "Debug: Automatically imported tripleo::trusted_cas from tripleo/trusted_cas into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile 
/etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::trusted_cas::ca_map in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/docker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::docker from tripleo/profile/base/docker into production", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::registry_mirror in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::docker_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::additional_sockets in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::configure_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::network_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::configure_storage in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::storage_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::debug in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::deployment_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registry_address in JSON backend", > "Debug: hiera(): Looking 
up tripleo::profile::base::docker::docker_namespace in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registry in JSON backend", > "Debug: hiera(): Looking up deployment_user in JSON backend", > "Debug: importing '/etc/puppet/modules/sysctl/manifests/value.pp' in environment production", > "Debug: Automatically imported sysctl::value from sysctl/value into production", > "Debug: Resource group[docker] was not determined to be defined", > "Debug: Create new resource group[docker] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/kernel.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::kernel from tripleo/profile/base/kernel into production", > "Debug: hiera(): Looking up tripleo::profile::base::kernel::module_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::kernel::sysctl_settings in JSON backend", > "Debug: hiera(): Looking up kernel_modules in JSON backend", > "Debug: hiera(): Looking up sysctl_settings in JSON backend", > "Debug: importing '/etc/puppet/modules/kmod/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/kmod/manifests/load.pp' in environment production", > "Debug: Automatically imported kmod::load from kmod/load into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql::client from tripleo/profile/base/database/mysql/client into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::enable_ssl in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_read_default_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_read_default_group in JSON backend", > 
"Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_client_bind_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::ssl_ca in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::step in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::time::ntp from tripleo/profile/base/time/ntp into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/init.pp' in environment production", > "Debug: Automatically imported ntp from ntp into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/params.pp' in environment production", > "Debug: Automatically imported ntp::params from ntp/params into production", > "Debug: hiera(): Looking up ntp::autoupdate in JSON backend", > "Debug: hiera(): Looking up ntp::broadcastclient in JSON backend", > "Debug: hiera(): Looking up ntp::config in JSON backend", > "Debug: hiera(): Looking up ntp::config_dir in JSON backend", > "Debug: hiera(): Looking up ntp::config_file_mode in JSON backend", > "Debug: hiera(): Looking up ntp::config_template in JSON backend", > "Debug: hiera(): Looking up ntp::disable_auth in JSON backend", > "Debug: hiera(): Looking up ntp::disable_dhclient in JSON backend", > "Debug: hiera(): Looking up ntp::disable_kernel in JSON backend", > "Debug: hiera(): Looking up ntp::disable_monitor in JSON backend", > "Debug: hiera(): Looking up ntp::fudge in JSON backend", > "Debug: hiera(): Looking up ntp::driftfile in JSON backend", > "Debug: hiera(): Looking up ntp::leapfile in JSON backend", > "Debug: hiera(): Looking up ntp::logfile in JSON backend", > "Debug: hiera(): Looking up ntp::iburst_enable in JSON backend", > "Debug: hiera(): Looking up ntp::keys in JSON backend", > "Debug: hiera(): Looking up ntp::keys_enable in JSON backend", > 
"Debug: hiera(): Looking up ntp::keys_file in JSON backend", > "Debug: hiera(): Looking up ntp::keys_controlkey in JSON backend", > "Debug: hiera(): Looking up ntp::keys_requestkey in JSON backend", > "Debug: hiera(): Looking up ntp::keys_trusted in JSON backend", > "Debug: hiera(): Looking up ntp::minpoll in JSON backend", > "Debug: hiera(): Looking up ntp::maxpoll in JSON backend", > "Debug: hiera(): Looking up ntp::package_ensure in JSON backend", > "Debug: hiera(): Looking up ntp::package_manage in JSON backend", > "Debug: hiera(): Looking up ntp::package_name in JSON backend", > "Debug: hiera(): Looking up ntp::panic in JSON backend", > "Debug: hiera(): Looking up ntp::peers in JSON backend", > "Debug: hiera(): Looking up ntp::preferred_servers in JSON backend", > "Debug: hiera(): Looking up ntp::restrict in JSON backend", > "Debug: hiera(): Looking up ntp::interfaces in JSON backend", > "Debug: hiera(): Looking up ntp::interfaces_ignore in JSON backend", > "Debug: hiera(): Looking up ntp::servers in JSON backend", > "Debug: hiera(): Looking up ntp::service_enable in JSON backend", > "Debug: hiera(): Looking up ntp::service_ensure in JSON backend", > "Debug: hiera(): Looking up ntp::service_manage in JSON backend", > "Debug: hiera(): Looking up ntp::service_name in JSON backend", > "Debug: hiera(): Looking up ntp::service_provider in JSON backend", > "Debug: hiera(): Looking up ntp::stepout in JSON backend", > "Debug: hiera(): Looking up ntp::tinker in JSON backend", > "Debug: hiera(): Looking up ntp::tos in JSON backend", > "Debug: hiera(): Looking up ntp::tos_minclock in JSON backend", > "Debug: hiera(): Looking up ntp::tos_minsane in JSON backend", > "Debug: hiera(): Looking up ntp::tos_floor in JSON backend", > "Debug: hiera(): Looking up ntp::tos_ceiling in JSON backend", > "Debug: hiera(): Looking up ntp::tos_cohort in JSON backend", > "Debug: hiera(): Looking up ntp::udlc in JSON backend", > "Debug: hiera(): Looking up ntp::udlc_stratum in JSON 
backend", > "Debug: hiera(): Looking up ntp::ntpsigndsocket in JSON backend", > "Debug: hiera(): Looking up ntp::authprov in JSON backend", > "Debug: importing '/etc/puppet/modules/ntp/manifests/install.pp' in environment production", > "Debug: Automatically imported ntp::install from ntp/install into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/config.pp' in environment production", > "Debug: Automatically imported ntp::config from ntp/config into production", > "Debug: Scope(Class[Ntp::Config]): Retrieving template ntp/ntp.conf.erb", > "Debug: template[/etc/puppet/modules/ntp/templates/ntp.conf.erb]: Bound template variables for /etc/puppet/modules/ntp/templates/ntp.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/ntp/templates/ntp.conf.erb]: Interpolated template /etc/puppet/modules/ntp/templates/ntp.conf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/ntp/manifests/service.pp' in environment production", > "Debug: Automatically imported ntp::service from ntp/service into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > 
"Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing 
'/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up 
pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/snmp.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::snmp from tripleo/profile/base/snmp into production", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_config in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_password in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::step in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/sshd.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::sshd from tripleo/profile/base/sshd into production", > "Debug: hiera(): Looking up 
tripleo::profile::base::sshd::bannertext in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::motd in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::port in JSON backend", > "Debug: hiera(): Looking up ssh:server::options in JSON backend", > "Debug: importing '/etc/puppet/modules/ssh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server.pp' in environment production", > "Debug: Automatically imported ssh::server from ssh/server into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/params.pp' in environment production", > "Debug: Automatically imported ssh::params from ssh/params into production", > "Debug: hiera(): Looking up ssh::server::ensure in JSON backend", > "Debug: hiera(): Looking up ssh::server::validate_sshd_file in JSON backend", > "Debug: hiera(): Looking up ssh::server::use_augeas in JSON backend", > "Debug: hiera(): Looking up ssh::server::options_absent in JSON backend", > "Debug: hiera(): Looking up ssh::server::match_block in JSON backend", > "Debug: hiera(): Looking up ssh::server::use_issue_net in JSON backend", > "Debug: hiera(): Looking up ssh::server::options in JSON backend", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/install.pp' in environment production", > "Debug: Automatically imported ssh::server::install from ssh/server/install into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/config.pp' in environment production", > "Debug: Automatically imported ssh::server::config from ssh/server/config into production", > "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production", > "Debug: Automatically imported concat from concat into production", > "Debug: Scope(Class[Ssh::Server::Config]): Retrieving template ssh/sshd_config.erb", > "Debug: 
template[/etc/puppet/modules/ssh/templates/sshd_config.erb]: Bound template variables for /etc/puppet/modules/ssh/templates/sshd_config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/ssh/templates/sshd_config.erb]: Interpolated template /etc/puppet/modules/ssh/templates/sshd_config.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production", > "Debug: Automatically imported concat::fragment from concat/fragment into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/service.pp' in environment production", > "Debug: Automatically imported ssh::server::service from ssh/server/service into production", > "Debug: hiera(): Looking up ssh::server::service::ensure in JSON backend", > "Debug: hiera(): Looking up ssh::server::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/timezone/manifests/init.pp' in environment production", > "Debug: Automatically imported timezone from timezone into production", > "Debug: hiera(): Looking up timezone::timezone in JSON backend", > "Debug: hiera(): Looking up timezone::ensure in JSON backend", > "Debug: hiera(): Looking up timezone::hwutc in JSON backend", > "Debug: hiera(): Looking up timezone::autoupgrade in JSON backend", > "Debug: hiera(): Looking up timezone::notify_services in JSON backend", > "Debug: hiera(): Looking up timezone::package in JSON backend", > "Debug: hiera(): Looking up timezone::zoneinfo_dir in JSON backend", > "Debug: hiera(): Looking up timezone::localtime_file in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file_template in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file_supports_comment in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_update in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > 
"Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' 
in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/packages.pp' in environment production", > "Debug: Automatically imported tripleo::packages from tripleo/packages into production", > "Debug: hiera(): Looking up tripleo::packages::enable_install in JSON backend", > "Debug: hiera(): Looking up tripleo::packages::enable_upgrade in JSON backend", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/stages.pp' in environment production", > "Debug: Automatically imported stdlib::stages from 
stdlib/stages into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/tuned.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::tuned from tripleo/profile/base/tuned into production", > "Debug: hiera(): Looking up tripleo::profile::base::tuned::profile in JSON backend", > "Debug: Resource package[tuned] was not determined to be defined", > "Debug: Create new resource package[tuned] with params {\"ensure\"=>\"present\"}", > "Debug: Scope(Kmod::Load[nf_conntrack]): Retrieving template kmod/redhat.modprobe.erb", > "Debug: template[/etc/puppet/modules/kmod/templates/redhat.modprobe.erb]: Bound template variables for /etc/puppet/modules/kmod/templates/redhat.modprobe.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/kmod/templates/redhat.modprobe.erb]: Interpolated template /etc/puppet/modules/kmod/templates/redhat.modprobe.erb in 0.00 seconds", > "Debug: Scope(Kmod::Load[nf_conntrack_proto_sctp]): Retrieving template kmod/redhat.modprobe.erb", > "Debug: importing '/etc/puppet/modules/sysctl/manifests/base.pp' in environment production", > "Debug: Automatically imported sysctl::base from sysctl/base into production", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: template[inline]: Bound template variables for inline template in 0.04 seconds", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: 
hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_api_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_api_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_collector_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_collector_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_expirer_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_expirer_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::firewall_rules in JSON backend", > "Debug: hiera(): Looking 
up tripleo.ceph_mgr.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::gnocchi_statsd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend", > "Debug: hiera(): Looking up memcached_network in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.mysql.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_consoleauth.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_consoleauth::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.rabbitmq.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::rabbitmq::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::redis::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up snmpd_network in JSON backend",
> "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.sshd.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_proxy.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_ringbuilder.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend",
> "Debug: Adding relationship from Sysctl::Value[net.ipv4.ip_forward] to Package[docker] with 'before'",
> "Debug: Adding relationship from File[/etc/systemd/system/docker.service.d] to File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf] with 'before'",
> "Debug: Adding relationship from File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf] to Exec[systemd daemon-reload] with 'notify'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[fs.inotify.max_user_instances] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[fs.suid_dumpable] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[kernel.dmesg_restrict] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[kernel.pid_max] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.core.netdev_max_backlog] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.arp_accept] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.log_martians] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.secure_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.send_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.accept_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.log_martians] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.secure_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.send_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.ip_forward] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh1] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh2] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh3] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_intvl] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_probes] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_time] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.accept_ra] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.accept_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.autoconf] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.disable_ipv6] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.accept_ra] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.accept_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.autoconf] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.disable_ipv6] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.lo.disable_ipv6] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.netfilter.nf_conntrack_max] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.nf_conntrack_max] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[fs.inotify.max_user_instances] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[fs.suid_dumpable] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[kernel.dmesg_restrict] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[kernel.pid_max] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.core.netdev_max_backlog] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.arp_accept] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.log_martians] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.secure_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.send_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.accept_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.log_martians] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.secure_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.send_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.ip_forward] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh1] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh2] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh3] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_intvl] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_probes] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_time] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.accept_ra] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.accept_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.autoconf] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.disable_ipv6] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.accept_ra] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.accept_redirects] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.autoconf] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.disable_ipv6] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.lo.disable_ipv6] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.netfilter.nf_conntrack_max] with 'before'",
> "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.nf_conntrack_max] with 'before'",
> "Debug: Adding relationship from Anchor[ntp::begin] to Class[Ntp::Install] with 'before'",
> "Debug: Adding relationship from Class[Ntp::Install] to Class[Ntp::Config] with 'before'",
> "Debug: Adding relationship from Class[Ntp::Config] to Class[Ntp::Service] with 'notify'",
> "Debug: Adding relationship from Class[Ntp::Service] to Anchor[ntp::end] with 'before'",
> "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'",
> "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'",
> "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'",
> "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'",
> "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to File[etc-pacemaker-authkey] with 'before'",
> "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'",
> "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'",
> "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'",
> "Debug: Adding relationship from Anchor[ssh::server::start] to Class[Ssh::Server::Install] with 'before'",
> "Debug: Adding relationship from Class[Ssh::Server::Install] to Class[Ssh::Server::Config] with 'before'",
> "Debug: Adding relationship from Class[Ssh::Server::Config] to Class[Ssh::Server::Service] with 'notify'",
> "Debug: Adding relationship from Class[Ssh::Server::Service] to Anchor[ssh::server::end] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[docker] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[chronyd] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[ntp] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[sshd] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[iptables] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 
nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 
nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Stage[runtime] to Stage[setup_infra] with 'before'", > "Debug: Adding relationship from Stage[setup_infra] to Stage[deploy_infra] with 'before'", > "Debug: Adding relationship from Stage[deploy_infra] to Stage[setup_app] with 'before'", > "Debug: Adding relationship from Stage[setup_app] to Stage[deploy_app] with 'before'", > "Debug: Adding relationship from Stage[deploy_app] to Stage[deploy] with 'before'", > "Notice: Compiled catalog for controller-1.localdomain in environment production in 4.39 seconds", > "Debug: /File[/etc/systemd/system/docker.service.d]/seluser: Found seluser default 'system_u' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/selrole: Found selrole default 'object_r' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/seltype: Found seltype default 'container_unit_file_t' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/selrange: Found selrange default 's0' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/seluser: Found seluser default 'system_u' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/selrole: Found selrole default 'object_r' for 
/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/seltype: Found seltype default 'container_unit_file_t' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/selrange: Found selrange default 's0' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/docker/daemon.json]/seluser: Found seluser default 'system_u' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/selrole: Found selrole default 'object_r' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/seltype: Found seltype default 'container_config_t' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/selrange: Found selrange default 's0' for /etc/docker/daemon.json", > "Debug: /File[/var/lib/openstack]/seluser: Found seluser default 'system_u' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/selrole: Found selrole default 'object_r' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/seltype: Found seltype default 'var_lib_t' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/selrange: Found selrange default 's0' for /var/lib/openstack", > "Debug: /File[/etc/ntp.conf]/seluser: Found seluser default 'system_u' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/selrole: Found selrole default 'object_r' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/seltype: Found seltype default 'net_conf_t' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/selrange: Found selrange default 's0' for /etc/ntp.conf", > "Debug: /File[etc-pacemaker]/seluser: Found seluser default 'system_u' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/selrole: Found selrole default 'object_r' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/seltype: Found seltype default 'etc_t' for /etc/pacemaker", > "Debug: 
/File[etc-pacemaker]/selrange: Found selrange default 's0' for /etc/pacemaker", > "Debug: /File[etc-pacemaker-authkey]/seluser: Found seluser default 'system_u' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/selrole: Found selrole default 'object_r' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/seltype: Found seltype default 'etc_t' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/selrange: Found selrange default 's0' for /etc/pacemaker/authkey", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/seluser: Found seluser default 'system_u' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/selrole: Found selrole default 'object_r' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/seltype: Found seltype default 'systemd_unit_file_t' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/selrange: Found selrange default 's0' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/localtime]/seluser: Found seluser default 'system_u' for /etc/localtime", > "Debug: /File[/etc/localtime]/selrole: Found selrole default 'object_r' for /etc/localtime", > "Debug: /File[/etc/localtime]/seltype: Found seltype default 'locale_t' for /etc/localtime", > "Debug: /File[/etc/localtime]/selrange: Found selrange default 's0' for /etc/localtime", > "Debug: /File[/etc/sysconfig/iptables]/seluser: Found seluser default 'system_u' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/selrole: Found selrole default 'object_r' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/seltype: Found seltype default 'system_conf_t' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/selrange: Found selrange 
default 's0' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/ip6tables]/seluser: Found seluser default 'system_u' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/selrole: Found selrole default 'object_r' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/seltype: Found seltype default 'system_conf_t' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/selrange: Found selrange default 's0' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/seluser: Found seluser default 'system_u' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/selrole: Found selrole default 'object_r' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/seltype: Found seltype default 'etc_t' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/selrange: Found selrange default 's0' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/seluser: Found seluser default 'system_u' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/selrole: Found selrole default 'object_r' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/seltype: Found seltype default 'etc_t' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/selrange: Found selrange default 's0' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysctl.conf]/seluser: Found seluser default 'system_u' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/selrole: Found selrole default 'object_r' for /etc/sysctl.conf", 
> "Debug: /File[/etc/sysctl.conf]/seltype: Found seltype default 'system_conf_t' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/selrange: Found selrange default 's0' for /etc/sysctl.conf", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/seluser: Found seluser default 'system_u' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/selrole: Found selrole default 'object_r' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/seltype: Found seltype default 'systemd_unit_file_t' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/selrange: Found selrange default 's0' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /Firewall[000 accept related established rules ipv4]: [validate]", > "Debug: /Firewall[000 accept related established rules ipv6]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv4]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv6]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: [validate]", > "Debug: /Firewall[003 accept ssh ipv4]: [validate]", > "Debug: /Firewall[003 accept ssh ipv6]: [validate]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: [validate]", > "Debug: /Firewall[998 log all ipv4]: [validate]", > "Debug: /Firewall[998 log all ipv6]: [validate]", > "Debug: /Firewall[999 drop all ipv4]: [validate]", > "Debug: /Firewall[999 drop all ipv6]: [validate]", > "Debug: /Firewall[128 aodh-api ipv4]: [validate]", > "Debug: /Firewall[128 aodh-api ipv6]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]", > "Debug: /Firewall[113 
ceph_mgr ipv6]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv4]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv6]: [validate]", > "Debug: /Firewall[119 cinder ipv4]: [validate]", > "Debug: /Firewall[119 cinder ipv6]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]", > "Debug: /Firewall[112 glance_api ipv4]: [validate]", > "Debug: /Firewall[112 glance_api ipv6]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv4]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv6]: [validate]", > "Debug: /Firewall[125 heat_api ipv4]: [validate]", > "Debug: /Firewall[125 heat_api ipv6]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv4]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv6]: [validate]", > "Debug: /Firewall[127 horizon ipv4]: [validate]", > "Debug: /Firewall[127 horizon ipv6]: [validate]", > "Debug: /Firewall[111 keystone ipv4]: [validate]", > "Debug: /Firewall[111 keystone ipv6]: [validate]", > "Debug: /Firewall[121 memcached ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]", > "Debug: /Firewall[114 neutron api ipv4]: [validate]", > "Debug: /Firewall[114 neutron api ipv6]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]", > "Debug: /Firewall[118 
neutron vxlan networks ipv6]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv6]: [validate]", > "Debug: /Firewall[113 nova_api ipv4]: [validate]", > "Debug: /Firewall[113 nova_api ipv6]: [validate]", > "Debug: /Firewall[138 nova_placement ipv4]: [validate]", > "Debug: /Firewall[138 nova_placement ipv6]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]", > "Debug: /Firewall[105 ntp ipv4]: [validate]", > "Debug: /Firewall[105 ntp ipv6]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv4]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]", > "Debug: /Firewall[140 panko-api ipv4]: [validate]", > "Debug: /Firewall[140 panko-api ipv6]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv4]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv6]: [validate]", > "Debug: /Firewall[122 swift proxy ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv6]: [validate]", > "Debug: /Firewall[123 swift storage ipv4]: [validate]", > "Debug: /Firewall[123 swift storage ipv6]: [validate]", > "Debug: Creating default schedules", > "Debug: /File[/etc/ssh/sshd_config]/seluser: Found seluser default 'system_u' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/selrole: Found selrole default 'object_r' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/seltype: Found seltype default 'etc_t' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/selrange: Found selrange default 's0' for /etc/ssh/sshd_config", > "Info: Applying configuration version '1534432867'", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/before: subscribes to File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/notify: subscribes to Exec[systemd daemon-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/docker/daemon.json]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/require: subscribes to File[/etc/docker/daemon.json]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/require: subscribes 
to File[/etc/docker/daemon.json]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/require: subscribes to Class[Sysctl::Base]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/before: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/require: 
subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/before: subscribes to Augeas[tripleo-mysql-client-conf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/before: subscribes to Class[Ntp]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/before: subscribes to 
Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: subscribes to Class[Ntp::Install]", > "Debug: /Stage[main]/Ntp::Install/before: subscribes to Class[Ntp::Config]", > "Debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]", > "Debug: /Stage[main]/Ntp::Service/before: subscribes to Anchor[ntp::end]", > "Debug: /Stage[main]/Ntp::Service/Service[ntp]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: 
subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Ssh::Server::Install/before: subscribes to Class[Ssh::Server::Config]", > "Debug: /Stage[main]/Ssh::Server::Config/notify: subscribes to Class[Ssh::Server::Service]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/notify: subscribes to Service[sshd]", > "Debug: /Stage[main]/Ssh::Server::Service/before: subscribes to Anchor[ssh::server::end]", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]/require: subscribes to Class[Ssh::Server::Config]", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Ssh::Server/Anchor[ssh::server::start]/before: subscribes to Class[Ssh::Server::Install]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: 
subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[setup]/before: subscribes to Stage[main]", > "Debug: /Stage[runtime]/require: subscribes to Stage[main]", > "Debug: /Stage[runtime]/before: subscribes to Stage[setup_infra]", > "Debug: /Stage[setup_infra]/before: subscribes to Stage[deploy_infra]", > "Debug: /Stage[deploy_infra]/before: subscribes to Stage[setup_app]", > "Debug: /Stage[setup_app]/before: subscribes to Stage[deploy_app]", > "Debug: /Stage[deploy_app]/before: subscribes to Stage[deploy]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/require: subscribes to Package[tuned]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[fs.inotify.max_user_instances]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to 
Sysctl[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe 
nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to 
Sysctl[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh1]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_ra]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/before: subscribes to Sysctl_runtime[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/before: subscribes to Sysctl_runtime[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/before: subscribes to Sysctl_runtime[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/before: subscribes to Sysctl_runtime[kernel.pid_max]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/before: subscribes to Sysctl_runtime[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.send_redirects]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/before: subscribes to Sysctl_runtime[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/before: 
subscribes to Sysctl_runtime[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/before: subscribes to Sysctl_runtime[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/before: subscribes to Sysctl_runtime[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/Concat_file[/etc/ssh/sshd_config]/before: subscribes to File[/etc/ssh/sshd_config]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 
accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop 
all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 
cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 
gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: 
subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 
keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp 
input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp 
output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 
neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks 
ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 
nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy 
ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 
pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 
redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/Concat_file[/etc/ssh/sshd_config]: Skipping automatic relationship with File[/etc/ssh/sshd_config]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding 
autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 
accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log 
all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: 
Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi 
initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > 
"Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with 
Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon 
ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created",
> "Debug: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]: The container Class[Main] will propagate my refresh event",
> "Debug: Class[Main]: The container Stage[main] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]/ensure: created",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/groupadd docker'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Group[docker]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Group[docker]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Exec[directory-create-etc-my.cnf.d](provider=posix): Executing check 'test -d /etc/my.cnf.d'",
> "Debug: Executing: 'test -d /etc/my.cnf.d'",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'set' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/bind-address\", \"172.17.1.15\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'rm' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/ssl\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'rm' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/ssl-ca\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Files changed, should execute",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]: The container Class[Tripleo::Profile::Base::Database::Mysql::Client] will propagate my refresh event",
> "Debug: Class[Tripleo::Profile::Base::Database::Mysql::Client]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/systemctl is-active chronyd'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled chronyd'",
> "Debug: Executing: '/usr/bin/systemctl stop chronyd'",
> "Debug: Executing: '/usr/bin/systemctl disable chronyd'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]: The container Class[Tripleo::Profile::Base::Time::Ntp] will propagate my refresh event",
> "Debug: Class[Tripleo::Profile::Base::Time::Ntp]: The container Stage[main] will propagate my refresh event",
> "Debug: Prefetching norpm resources for package",
> "Debug: Executing: '/usr/bin/rpm -q ntp --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Info: Computing checksum on file /etc/ntp.conf",
> "Info: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]: Filebucketed /etc/ntp.conf to puppet with sum 913c85f0fde85f83c2d6c030ecf259e9",
> "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'",
> "Debug: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]: The container Class[Ntp::Config] will propagate my refresh event",
> "Debug: Class[Ntp::Config]: The container Stage[main] will propagate my refresh event",
> "Info: Class[Ntp::Config]: Scheduling refresh of Class[Ntp::Service]",
> "Info: Class[Ntp::Service]: Scheduling refresh of Service[ntp]",
> "Debug: Executing: '/usr/bin/systemctl is-active ntpd'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled ntpd'",
> "Debug: Executing: '/usr/bin/systemctl unmask ntpd'",
> "Debug: Executing: '/usr/bin/systemctl start ntpd'",
> "Debug: Executing: '/usr/bin/systemctl enable ntpd'",
> "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Ntp::Service/Service[ntp]: The container Class[Ntp::Service] will propagate my refresh event",
> "Info: /Stage[main]/Ntp::Service/Service[ntp]: Unscheduling refresh on Service[ntp]",
> "Debug: Class[Ntp::Service]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q pacemaker --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q pcs --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q fence-agents-all --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q pacemaker-libs --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled corosync'",
> "Debug: Executing: '/usr/bin/systemctl unmask corosync'",
> "Debug: Executing: '/usr/bin/systemctl enable corosync'",
> "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'",
> "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: The container Class[Pacemaker::Service] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/systemctl is-enabled pacemaker'",
> "Debug: Executing: '/usr/bin/systemctl unmask pacemaker'",
> "Debug: Executing: '/usr/bin/systemctl enable pacemaker'",
> "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'",
> "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: The container Class[Pacemaker::Service] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q openssh-server --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'",
> "Debug: /Stage[main]/Timezone/File[/etc/localtime]: The container Class[Timezone] will propagate my refresh event",
> "Debug: Class[Timezone]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q iptables --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-active firewalld'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled firewalld'",
> "Debug: Executing: '/usr/bin/rpm -q iptables-services --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-active iptables'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled iptables'",
> "Debug: Executing: '/usr/bin/systemctl unmask iptables'",
> "Debug: Executing: '/usr/bin/systemctl start iptables'",
> "Debug: Executing: '/usr/bin/systemctl enable iptables'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: The container Class[Firewall::Linux::Redhat] will propagate my refresh event",
> "Info: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Unscheduling refresh on Service[iptables]",
> "Debug: Executing: '/usr/bin/systemctl is-active ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl unmask ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl start ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl enable ip6tables'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: The container Class[Firewall::Linux::Redhat] will propagate my refresh event",
> "Info: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Unscheduling refresh on Service[ip6tables]",
> "Debug: Executing: '/usr/bin/rpm -q tuned --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Exec[tuned-adm](provider=posix): Executing check 'tuned-adm active | grep -q '''",
> "Debug: Executing: 'tuned-adm active | grep -q '''",
> "Debug: Exec[modprobe nf_conntrack](provider=posix): Executing check 'egrep -q '^nf_conntrack ' /proc/modules'",
> "Debug: Executing: 'egrep -q '^nf_conntrack ' /proc/modules'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]: The container Kmod::Load[nf_conntrack] will propagate my refresh event",
> "Debug: Kmod::Load[nf_conntrack]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Exec[modprobe nf_conntrack_proto_sctp](provider=posix): Executing check 'egrep -q '^nf_conntrack_proto_sctp ' /proc/modules'",
> "Debug: Executing: 'egrep -q '^nf_conntrack_proto_sctp ' /proc/modules'",
> "Debug: Exec[modprobe nf_conntrack_proto_sctp](provider=posix): Executing 'modprobe nf_conntrack_proto_sctp'",
> "Debug: Executing: 'modprobe nf_conntrack_proto_sctp'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]: The container Kmod::Load[nf_conntrack_proto_sctp] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]: The container Kmod::Load[nf_conntrack_proto_sctp] will propagate my refresh event",
> "Debug: Kmod::Load[nf_conntrack_proto_sctp]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Prefetching parsed resources for sysctl",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created",
> "Debug: Flushing sysctl provider target /etc/sysctl.conf",
> "Info: Computing checksum on file /etc/sysctl.conf",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]: The container Sysctl::Value[fs.inotify.max_user_instances] will propagate my refresh event",
> "Debug: Prefetching sysctl_runtime resources for sysctl_runtime",
> "Debug: Executing: '/usr/sbin/sysctl -a'",
> "Debug: Executing: '/usr/sbin/sysctl fs.inotify.max_user_instances=1024'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]: The container Sysctl::Value[fs.inotify.max_user_instances] will propagate my refresh event",
> "Debug: Sysctl::Value[fs.inotify.max_user_instances]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]: The container Sysctl::Value[fs.suid_dumpable] will propagate my refresh event",
> "Debug: Sysctl::Value[fs.suid_dumpable]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]: The container 
Sysctl::Value[kernel.dmesg_restrict] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl kernel.dmesg_restrict=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]: The container Sysctl::Value[kernel.dmesg_restrict] will propagate my refresh event", > "Debug: Sysctl::Value[kernel.dmesg_restrict]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]: The container Sysctl::Value[kernel.pid_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl kernel.pid_max=1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]: The container Sysctl::Value[kernel.pid_max] will propagate my refresh event", > "Debug: Sysctl::Value[kernel.pid_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]: The container Sysctl::Value[net.core.netdev_max_backlog] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.core.netdev_max_backlog=10000'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]: The container Sysctl::Value[net.core.netdev_max_backlog] will propagate my refresh event", > "Debug: Sysctl::Value[net.core.netdev_max_backlog]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]: The container Sysctl::Value[net.ipv4.conf.all.arp_accept] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.arp_accept=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]: The container Sysctl::Value[net.ipv4.conf.all.arp_accept] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.all.arp_accept]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]: The container Sysctl::Value[net.ipv4.conf.all.log_martians] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl 
net.ipv4.conf.all.log_martians=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]: The container Sysctl::Value[net.ipv4.conf.all.log_martians] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.all.log_martians]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.all.secure_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.secure_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.all.secure_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.all.secure_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]: The container 
Sysctl::Value[net.ipv4.conf.all.send_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.send_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]: The container Sysctl::Value[net.ipv4.conf.all.send_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.all.send_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv4.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.accept_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv4.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.default.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]: The container Sysctl::Value[net.ipv4.conf.default.log_martians] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.log_martians=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]: The container Sysctl::Value[net.ipv4.conf.default.log_martians] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.default.log_martians]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.default.secure_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.secure_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]: The container 
Sysctl::Value[net.ipv4.conf.default.secure_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.default.secure_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]: The container Sysctl::Value[net.ipv4.conf.default.send_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.send_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]: The container Sysctl::Value[net.ipv4.conf.default.send_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.conf.default.send_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: The container Sysctl::Value[net.ipv4.ip_forward] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.ip_forward=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl_runtime[net.ipv4.ip_forward]/val: val changed '0' to '1'", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl_runtime[net.ipv4.ip_forward]: The container Sysctl::Value[net.ipv4.ip_forward] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.ip_forward]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Debug: Executing: '/usr/bin/rpm -q docker --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/ensure: defined content as '{md5}b984426de0b5978853686a649b64e4b8'", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]: Scheduling refresh of Exec[systemd daemon-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Exec[systemd daemon-reload](provider=posix): Executing 'systemctl daemon-reload'", > "Debug: Executing: 'systemctl daemon-reload'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Triggered 'refresh' from 1 events", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-sysconfig-options](provider=augeas): Opening augeas with root /, lens 
path , flags 64", > "Debug: Augeas[docker-sysconfig-options](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-sysconfig-options](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-sysconfig-options](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker/OPTIONS\", \"\\\"--log-driver=journald --signature-verification=false --iptables=false --live-restore -H unix:///run/docker.sock -H unix:///var/lib/openstack/docker.sock\\\"\"]", > "Debug: Augeas[docker-sysconfig-options](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-sysconfig-options](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-sysconfig-registry](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker/INSECURE_REGISTRY\", \"\\\"--insecure-registry 192.168.24.1:8787\\\"\"]", > "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/returns: executed successfully", > "Info: 
/Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): sending command 'rm' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"registry-mirrors\\\"]\"]", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Skipping because no files were changed", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Closed the augeas connection", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): sending command 'set' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"debug\\\"]\", \"debug\"]", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): sending command 'set' with params [\"/files/etc/docker/daemon.json/dict/entry[. 
= \\\"debug\\\"]/const\", \"true\"]", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker-storage/DOCKER_STORAGE_OPTIONS\", \"\\\" -s overlay2\\\"\"]", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: 
Augeas[docker-sysconfig-network](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker-network/DOCKER_NETWORK_OPTIONS\", \"\\\" --bip=172.31.0.1/24\\\"\"]", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-active docker'", > "Debug: Executing: '/usr/bin/systemctl is-enabled docker'", > "Debug: Executing: '/usr/bin/systemctl unmask docker'", > "Debug: Executing: '/usr/bin/systemctl start docker'", > "Debug: Executing: '/usr/bin/systemctl enable docker'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/ensure: ensure changed 'stopped' to 'running'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: Unscheduling refresh on Service[docker]", > "Debug: Class[Tripleo::Profile::Base::Docker]: The container Stage[main] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh1] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh1=1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh1] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh2] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh2=2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh2] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh3] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh3=4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh3] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]: The container Sysctl::Value[net.ipv4.tcp_keepalive_intvl] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_intvl=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]: The container Sysctl::Value[net.ipv4.tcp_keepalive_intvl] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_intvl]: The container 
Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]: The container Sysctl::Value[net.ipv4.tcp_keepalive_probes] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_probes=5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]: The container Sysctl::Value[net.ipv4.tcp_keepalive_probes] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_probes]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]: The container Sysctl::Value[net.ipv4.tcp_keepalive_time] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_time=5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]: The container Sysctl::Value[net.ipv4.tcp_keepalive_time] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.ipv4.tcp_keepalive_time]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]: The container Sysctl::Value[net.ipv6.conf.all.accept_ra] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.accept_ra=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]: The container Sysctl::Value[net.ipv6.conf.all.accept_ra] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.accept_ra]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.all.accept_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.accept_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]: The container 
Sysctl::Value[net.ipv6.conf.all.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]: The container Sysctl::Value[net.ipv6.conf.all.autoconf] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.autoconf=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]: The container Sysctl::Value[net.ipv6.conf.all.autoconf] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.autoconf]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.all.disable_ipv6] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]: The container Sysctl::Value[net.ipv6.conf.default.accept_ra] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.accept_ra=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]: The container Sysctl::Value[net.ipv6.conf.default.accept_ra] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.accept_ra]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.accept_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]: The container Sysctl::Value[net.ipv6.conf.default.autoconf] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.autoconf=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]: The container Sysctl::Value[net.ipv6.conf.default.autoconf] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.autoconf]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.default.disable_ipv6] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.lo.disable_ipv6] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]: The container Sysctl::Value[net.netfilter.nf_conntrack_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.netfilter.nf_conntrack_max=500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]: The container Sysctl::Value[net.netfilter.nf_conntrack_max] will propagate my refresh event", > "Debug: Sysctl::Value[net.netfilter.nf_conntrack_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]: The container Sysctl::Value[net.nf_conntrack_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.nf_conntrack_max=500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]: The container Sysctl::Value[net.nf_conntrack_max] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.nf_conntrack_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Kernel]: The container Stage[main] will propagate my refresh event", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: The container Stage[main] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-active pcsd'", > "Debug: Executing: '/usr/bin/systemctl is-enabled pcsd'", > "Debug: Executing: '/usr/bin/systemctl unmask pcsd'", > "Debug: Executing: '/usr/bin/systemctl start pcsd'", > "Debug: Executing: '/usr/bin/systemctl enable pcsd'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: The container Class[Pacemaker::Service] will propagate my refresh event", > "Info: 
/Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling refresh on Service[pcsd]", > "Debug: Class[Pacemaker::Service]: The container Stage[main] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/usermod -p $6$BiHxiTrot9$22yeVFjOfYEVatKG2wyGdEHQIuRfSCFfVh1gxpt505m3HXVNXBpZrQcknLCF7cJRj8PXTIhqq1SlcwVaTImuM1 hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Debug: Executing: '/usr/sbin/usermod -G haclient hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Info: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Scheduling refresh of Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Exec try 1/360", > "Debug: Exec[reauthenticate-across-all-nodes](provider=posix): Executing '/sbin/pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p a27rypXMwVPVqWHT --force'", > "Debug: Executing: '/sbin/pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p a27rypXMwVPVqWHT --force'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Sleeping for 10.0 seconds between tries", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Exec try 2/360", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh 
event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}0935666a8d0f9bd85e683dd1382bd797'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Exec[wait-for-settle](provider=posix): Executing check '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: Executing: '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/unless: Error: cluster is not currently running on this node", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 1/360", > "Debug: Exec[wait-for-settle](provider=posix): Executing '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Sleeping for 10.0 seconds between tries", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 2/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 3/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 4/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 5/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 6/360", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Class[Pacemaker::Corosync]: The container Stage[main] will propagate my refresh event", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Scheduling refresh of Exec[systemctl-daemon-reload]", > "Debug: Exec[systemctl-daemon-reload](provider=posix): Executing 'systemctl daemon-reload'", > "Notice: 
/Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: The container Class[Systemd::Systemctl::Daemon_reload] will propagate my refresh event", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: The container Stage[main] will propagate my refresh event", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: The container Class[Systemd] will propagate my refresh event", > "Debug: Class[Systemd]: The container Stage[main] will propagate my refresh event", > "Info: Computing checksum on file /etc/ssh/sshd_config", > "Info: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: Filebucketed /etc/ssh/sshd_config to puppet with sum 781dbef6518331ceaa1de16137f5328c", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}781dbef6518331ceaa1de16137f5328c' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: The container Concat[/etc/ssh/sshd_config] will propagate my refresh event", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: The container /etc/ssh/sshd_config will propagate my refresh event", > "Debug: /etc/ssh/sshd_config: The container Concat[/etc/ssh/sshd_config] will propagate my refresh event", > "Debug: Concat[/etc/ssh/sshd_config]: The container Class[Ssh::Server::Config] will propagate my refresh event", > "Info: Concat[/etc/ssh/sshd_config]: Scheduling refresh of Service[sshd]", > "Debug: Class[Ssh::Server::Config]: The container Stage[main] will propagate my refresh event", > "Info: Class[Ssh::Server::Config]: Scheduling refresh of Class[Ssh::Server::Service]", > "Info: Class[Ssh::Server::Service]: Scheduling refresh of Service[sshd]", > "Debug: Executing: 
'/usr/bin/systemctl is-active sshd'", > "Debug: Executing: '/usr/bin/systemctl is-enabled sshd'", > "Debug: Executing: '/usr/bin/systemctl restart sshd'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]: The container Class[Ssh::Server::Service] will propagate my refresh event", > "Debug: Class[Ssh::Server::Service]: The container Stage[main] will propagate my refresh event", > "Debug: Prefetching iptables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]", > "Debug: Executing: '/usr/sbin/iptables-save'", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): Inserting rule 000 accept related established rules ipv4", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 1 --wait -t filter -p all -m state --state ESTABLISHED,RELATED -j ACCEPT -m comment --comment 000 accept related established rules ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [flush]", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/iptables.init save'", > "Debug: /Firewall[000 accept related established rules ipv4]: The container Tripleo::Firewall::Rule[000 accept related established rules] will propagate my refresh event", > "Debug: Prefetching ip6tables resources for firewall", > "Debug: 
Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]", > "Debug: Executing: '/usr/sbin/ip6tables-save'", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): Inserting rule 000 accept related established rules ipv6", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 1 --wait -t filter -p all -m state --state ESTABLISHED,RELATED -j ACCEPT -m comment --comment 000 accept related established rules ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'", > "Debug: /Firewall[000 accept related established rules ipv6]: The container Tripleo::Firewall::Rule[000 accept related established rules] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): Inserting rule 001 accept all icmp ipv4", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 2 --wait -t filter -p icmp -m state --state NEW -j ACCEPT -m comment --comment 001 accept all icmp ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [flush]", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: The container Tripleo::Firewall::Rule[001 accept all icmp] will propagate my refresh event", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): Inserting rule 001 accept all icmp ipv6", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 2 --wait -t filter -p ipv6-icmp -m state --state NEW -j ACCEPT -m comment --comment 001 accept all icmp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: The container Tripleo::Firewall::Rule[001 accept all icmp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): Inserting rule 002 accept all to lo interface ipv4", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 3 --wait -t filter -i lo -p all -m state --state NEW -j ACCEPT -m comment --comment 002 accept all to lo interface ipv4'", > 
"Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [flush]", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: The container Tripleo::Firewall::Rule[002 accept all to lo interface] will propagate my refresh event", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): Inserting rule 002 accept all to lo interface ipv6", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 3 --wait -t filter -i lo -p all -m state --state NEW -j ACCEPT -m comment --comment 002 accept all to lo interface ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: The container Tripleo::Firewall::Rule[002 accept all to lo interface] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): Inserting rule 003 accept ssh ipv4", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/iptables -I INPUT 4 --wait -t filter -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT -m comment --comment 003 accept ssh ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [flush]", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: The container Tripleo::Firewall::Rule[003 accept ssh] will propagate my refresh event", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): Inserting rule 003 accept ssh ipv6", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 4 --wait -t filter -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT -m comment --comment 003 accept ssh ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: The container Tripleo::Firewall::Rule[003 accept ssh] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): Inserting rule 004 accept ipv6 dhcpv6 ipv6", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 5 --wait -t filter -d fe80::/64 -p udp -m multiport --dports 
546 -m state --state NEW -j ACCEPT -m comment --comment 004 accept ipv6 dhcpv6 ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: The container Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall::Pre]: The container Stage[main] will propagate my refresh event", > "Debug: Firewall[998 log all ipv4](provider=iptables): Inserting rule 998 log all ipv4", > "Debug: Firewall[998 log all ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[998 log all ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p all -m state --state NEW -j LOG -m comment --comment 998 log all ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Debug: Firewall[998 log all ipv4](provider=iptables): [flush]", > "Debug: Firewall[998 log all ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[998 log all ipv4]: The container Tripleo::Firewall::Rule[998 log all] will propagate my refresh event", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): Inserting rule 998 log all ipv6", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p all -m state --state NEW -j LOG -m comment --comment 998 
log all ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[998 log all ipv6]: The container Tripleo::Firewall::Rule[998 log all] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[998 log all]: The container Class[Tripleo::Firewall::Post] will propagate my refresh event", > "Debug: Firewall[999 drop all ipv4](provider=iptables): Inserting rule 999 drop all ipv4", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[999 drop all ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p all -m state --state NEW -j DROP -m comment --comment 999 drop all ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [flush]", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[999 drop all ipv4]: The container Tripleo::Firewall::Rule[999 drop all] will propagate my refresh event", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): Inserting rule 999 drop all ipv6", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p all -m state --state NEW -j DROP -m comment --comment 999 drop all ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [flush]", > "Debug: 
Firewall[999 drop all ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[999 drop all ipv6]: The container Tripleo::Firewall::Rule[999 drop all] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[999 drop all]: The container Class[Tripleo::Firewall::Post] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall::Post]: The container Stage[main] will propagate my refresh event", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): Inserting rule 128 aodh-api ipv4", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042,13042 -m state --state NEW -j ACCEPT -m comment --comment 128 aodh-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: The container Tripleo::Firewall::Rule[128 aodh-api] will propagate my refresh event", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): Inserting rule 128 aodh-api ipv6", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8042,13042 -m state --state NEW -j ACCEPT -m comment --comment 128 aodh-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Debug: Firewall[128 aodh-api 
ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: The container Tripleo::Firewall::Rule[128 aodh-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: The container Tripleo::Firewall::Service_rules[aodh_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): Inserting rule 113 ceph_mgr ipv4", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 6800:7300 -m state --state NEW -j ACCEPT -m comment --comment 113 ceph_mgr ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [flush]", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: The container Tripleo::Firewall::Rule[113 ceph_mgr] will propagate my refresh event", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): Inserting rule 113 ceph_mgr ipv6", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6800:7300 -m state --state NEW -j ACCEPT -m comment --comment 113 ceph_mgr ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 
ceph_mgr ipv6]/ensure: created", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: The container Tripleo::Firewall::Rule[113 ceph_mgr] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: The container Tripleo::Firewall::Service_rules[ceph_mgr] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): Inserting rule 110 ceph_mon ipv4", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 6789 -m state --state NEW -j ACCEPT -m comment --comment 110 ceph_mon ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [flush]", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: The container Tripleo::Firewall::Rule[110 ceph_mon] will propagate my refresh event", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): Inserting rule 110 ceph_mon ipv6", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6789 -m state --state NEW -j ACCEPT -m comment --comment 110 ceph_mon ipv6'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: The container Tripleo::Firewall::Rule[110 ceph_mon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: The container Tripleo::Firewall::Service_rules[ceph_mon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[119 cinder ipv4](provider=iptables): Inserting rule 119 cinder ipv4", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[119 cinder ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 8776,13776 -m state --state NEW -j ACCEPT -m comment --comment 119 cinder ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [flush]", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[119 cinder ipv4]: The container Tripleo::Firewall::Rule[119 cinder] will propagate my refresh event", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): Inserting rule 119 cinder ipv6", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 8776,13776 -m state --state NEW -j ACCEPT -m comment --comment 
119 cinder ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[119 cinder ipv6]: The container Tripleo::Firewall::Rule[119 cinder] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[119 cinder]: The container Tripleo::Firewall::Service_rules[cinder_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): Inserting rule 120 iscsi initiator ipv4", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 3260 -m state --state NEW -j ACCEPT -m comment --comment 120 iscsi initiator ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [flush]", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: The container Tripleo::Firewall::Rule[120 iscsi initiator] will propagate my refresh event", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): Inserting rule 120 iscsi initiator ipv6", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > 
"Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3260 -m state --state NEW -j ACCEPT -m comment --comment 120 iscsi initiator ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: The container Tripleo::Firewall::Rule[120 iscsi initiator] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: The container Tripleo::Firewall::Service_rules[cinder_volume] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): Inserting rule 112 glance_api ipv4", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 9292,13292 -m state --state NEW -j ACCEPT -m comment --comment 112 glance_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[112 glance_api ipv4]: The container Tripleo::Firewall::Rule[112 glance_api] will propagate my refresh event", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): Inserting rule 112 glance_api ipv6", > "Debug: Firewall[112 
glance_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292,13292 -m state --state NEW -j ACCEPT -m comment --comment 112 glance_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[112 glance_api ipv6]: The container Tripleo::Firewall::Rule[112 glance_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: The container Tripleo::Firewall::Service_rules[glance_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): Inserting rule 129 gnocchi-api ipv4", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041,13041 -m state --state NEW -j ACCEPT -m comment --comment 129 gnocchi-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: The container Tripleo::Firewall::Rule[129 gnocchi-api] will propagate my refresh 
event", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): Inserting rule 129 gnocchi-api ipv6", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041,13041 -m state --state NEW -j ACCEPT -m comment --comment 129 gnocchi-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: The container Tripleo::Firewall::Rule[129 gnocchi-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: The container Tripleo::Firewall::Service_rules[gnocchi_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): Inserting rule 140 gnocchi-statsd ipv4", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p udp -m multiport --dports 8125 -m state --state NEW -j ACCEPT -m comment --comment 140 gnocchi-statsd ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [flush]", > "Debug: Firewall[140 gnocchi-statsd 
ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: The container Tripleo::Firewall::Rule[140 gnocchi-statsd] will propagate my refresh event", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): Inserting rule 140 gnocchi-statsd ipv6", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p udp -m multiport --dports 8125 -m state --state NEW -j ACCEPT -m comment --comment 140 gnocchi-statsd ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: The container Tripleo::Firewall::Rule[140 gnocchi-statsd] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: The container Tripleo::Firewall::Service_rules[gnocchi_statsd] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): Inserting rule 107 haproxy stats ipv4", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 1993 -m state --state NEW -j ACCEPT -m comment --comment 107 haproxy stats ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [flush]", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: The container Tripleo::Firewall::Rule[107 haproxy stats] will propagate my refresh event", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): Inserting rule 107 haproxy stats ipv6", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 1993 -m state --state NEW -j ACCEPT -m comment --comment 107 haproxy stats ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: The container Tripleo::Firewall::Rule[107 haproxy stats] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: The container Tripleo::Firewall::Service_rules[haproxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): Inserting rule 125 heat_api ipv4", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 
--wait -t filter -p tcp -m multiport --dports 8004,13004 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[125 heat_api ipv4]: The container Tripleo::Firewall::Rule[125 heat_api] will propagate my refresh event", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): Inserting rule 125 heat_api ipv6", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8004,13004 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[125 heat_api ipv6]: The container Tripleo::Firewall::Rule[125 heat_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: The container Tripleo::Firewall::Service_rules[heat_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): Inserting rule 125 heat_cfn ipv4", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): Current resource: 
Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8000,13800 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_cfn ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [flush]", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: The container Tripleo::Firewall::Rule[125 heat_cfn] will propagate my refresh event", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): Inserting rule 125 heat_cfn ipv6", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8000,13800 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_cfn ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: The container Tripleo::Firewall::Rule[125 heat_cfn] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: The container Tripleo::Firewall::Service_rules[heat_api_cfn] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[127 horizon ipv4](provider=iptables): Inserting rule 127 horizon ipv4", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [insert_order]", > 
"Debug: Firewall[127 horizon ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT -m comment --comment 127 horizon ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [flush]", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[127 horizon ipv4]: The container Tripleo::Firewall::Rule[127 horizon] will propagate my refresh event", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): Inserting rule 127 horizon ipv6", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT -m comment --comment 127 horizon ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[127 horizon ipv6]: The container Tripleo::Firewall::Rule[127 horizon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[127 horizon]: The container Tripleo::Firewall::Service_rules[horizon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[horizon]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[111 keystone ipv4](provider=iptables): Inserting rule 111 keystone ipv4", > "Debug: Firewall[111 keystone 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[111 keystone ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000,13000,35357 -m state --state NEW -j ACCEPT -m comment --comment 111 keystone ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [flush]", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[111 keystone ipv4]: The container Tripleo::Firewall::Rule[111 keystone] will propagate my refresh event", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): Inserting rule 111 keystone ipv6", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 5000,13000,35357 -m state --state NEW -j ACCEPT -m comment --comment 111 keystone ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[111 keystone ipv6]: The container Tripleo::Firewall::Rule[111 keystone] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[111 keystone]: The container Tripleo::Firewall::Service_rules[keystone] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[keystone]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[121 memcached 
ipv4](provider=iptables): Inserting rule 121 memcached ipv4", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[121 memcached ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -s 172.17.1.0/24 -p tcp -m multiport --dports 11211 -m state --state NEW -j ACCEPT -m comment --comment 121 memcached ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [flush]", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[121 memcached ipv4]: The container Tripleo::Firewall::Rule[121 memcached] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[121 memcached]: The container Tripleo::Firewall::Service_rules[memcached] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[memcached]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): Inserting rule 104 mysql galera-bundle ipv4", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 873,3123,3306,4444,4567,4568,9200 -m state --state NEW -j ACCEPT -m comment --comment 104 mysql galera-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[104 mysql galera-bundle 
ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: The container Tripleo::Firewall::Rule[104 mysql galera-bundle] will propagate my refresh event", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): Inserting rule 104 mysql galera-bundle ipv6", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 873,3123,3306,4444,4567,4568,9200 -m state --state NEW -j ACCEPT -m comment --comment 104 mysql galera-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: The container Tripleo::Firewall::Rule[104 mysql galera-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: The container Tripleo::Firewall::Service_rules[mysql] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[mysql]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): Inserting rule 114 neutron api ipv4", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 9696,13696 -m state --state NEW -j ACCEPT -m comment --comment 114 neutron api ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [flush]", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[114 neutron api ipv4]: The container Tripleo::Firewall::Rule[114 neutron api] will propagate my refresh event", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): Inserting rule 114 neutron api ipv6", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 9696,13696 -m state --state NEW -j ACCEPT -m comment --comment 114 neutron api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[114 neutron api ipv6]: The container Tripleo::Firewall::Rule[114 neutron api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: The container Tripleo::Firewall::Service_rules[neutron_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): Inserting rule 115 neutron dhcp input ipv4", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/iptables -I INPUT 12 --wait -t filter -p udp -m multiport --dports 67 -m state --state NEW -j ACCEPT -m comment --comment 115 neutron dhcp input ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [flush]", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: The container Tripleo::Firewall::Rule[115 neutron dhcp input] will propagate my refresh event", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): Inserting rule 115 neutron dhcp input ipv6", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p udp -m multiport --dports 67 -m state --state NEW -j ACCEPT -m comment --comment 115 neutron dhcp input ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: The container Tripleo::Firewall::Rule[115 neutron dhcp input] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: The container Tripleo::Firewall::Service_rules[neutron_dhcp] will propagate my refresh event", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): Inserting rule 116 neutron dhcp output ipv4", > "Debug: Firewall[116 neutron dhcp output 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I OUTPUT 1 --wait -t filter -p udp -m multiport --dports 68 -m state --state NEW -j ACCEPT -m comment --comment 116 neutron dhcp output ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [flush]", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: The container Tripleo::Firewall::Rule[116 neutron dhcp output] will propagate my refresh event", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): Inserting rule 116 neutron dhcp output ipv6", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I OUTPUT 1 --wait -t filter -p udp -m multiport --dports 68 -m state --state NEW -j ACCEPT -m comment --comment 116 neutron dhcp output ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: The container Tripleo::Firewall::Rule[116 neutron dhcp output] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: The container Tripleo::Firewall::Service_rules[neutron_dhcp] will 
propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): Inserting rule 106 neutron_l3 vrrp ipv4", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p vrrp -m state --state NEW -j ACCEPT -m comment --comment 106 neutron_l3 vrrp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [flush]", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: The container Tripleo::Firewall::Rule[106 neutron_l3 vrrp] will propagate my refresh event", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): Inserting rule 106 neutron_l3 vrrp ipv6", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p vrrp -m state --state NEW -j ACCEPT -m comment --comment 106 neutron_l3 vrrp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: The container Tripleo::Firewall::Rule[106 
neutron_l3 vrrp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: The container Tripleo::Firewall::Service_rules[neutron_l3] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): Inserting rule 118 neutron vxlan networks ipv4", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p udp -m multiport --dports 4789 -m state --state NEW -j ACCEPT -m comment --comment 118 neutron vxlan networks ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [flush]", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: The container Tripleo::Firewall::Rule[118 neutron vxlan networks] will propagate my refresh event", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): Inserting rule 118 neutron vxlan networks ipv6", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p udp -m multiport --dports 4789 -m state --state NEW -j ACCEPT -m comment --comment 118 neutron vxlan networks ipv6'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: The container Tripleo::Firewall::Rule[118 neutron vxlan networks] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: The container Tripleo::Firewall::Service_rules[neutron_ovs_agent] will propagate my refresh event", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): Inserting rule 136 neutron gre networks ipv4", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 23 --wait -t filter -p gre -j ACCEPT -m comment --comment 136 neutron gre networks ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [flush]", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: The container Tripleo::Firewall::Rule[136 neutron gre networks] will propagate my refresh event", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): Inserting rule 136 neutron gre networks ipv6", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: 
Executing: '/usr/sbin/ip6tables -I INPUT 23 --wait -t filter -p gre -j ACCEPT -m comment --comment 136 neutron gre networks ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: The container Tripleo::Firewall::Rule[136 neutron gre networks] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: The container Tripleo::Firewall::Service_rules[neutron_ovs_agent] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): Inserting rule 113 nova_api ipv4", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8774,13774,8775 -m state --state NEW -j ACCEPT -m comment --comment 113 nova_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[113 nova_api ipv4]: The container Tripleo::Firewall::Rule[113 nova_api] will propagate my refresh event", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): Inserting rule 113 nova_api ipv6", > "Debug: Firewall[113 nova_api 
ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8774,13774,8775 -m state --state NEW -j ACCEPT -m comment --comment 113 nova_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[113 nova_api ipv6]: The container Tripleo::Firewall::Rule[113 nova_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: The container Tripleo::Firewall::Service_rules[nova_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): Inserting rule 138 nova_placement ipv4", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8778,13778 -m state --state NEW -j ACCEPT -m comment --comment 138 nova_placement ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [flush]", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: The container Tripleo::Firewall::Rule[138 nova_placement] will propagate my 
refresh event", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): Inserting rule 138 nova_placement ipv6", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8778,13778 -m state --state NEW -j ACCEPT -m comment --comment 138 nova_placement ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: The container Tripleo::Firewall::Rule[138 nova_placement] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: The container Tripleo::Firewall::Service_rules[nova_placement] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): Inserting rule 137 nova_vnc_proxy ipv4", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 6080,13080 -m state --state NEW -j ACCEPT -m comment --comment 137 nova_vnc_proxy ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Debug: Firewall[137 nova_vnc_proxy 
ipv4](provider=iptables): [flush]", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: The container Tripleo::Firewall::Rule[137 nova_vnc_proxy] will propagate my refresh event", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): Inserting rule 137 nova_vnc_proxy ipv6", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 6080,13080 -m state --state NEW -j ACCEPT -m comment --comment 137 nova_vnc_proxy ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: The container Tripleo::Firewall::Rule[137 nova_vnc_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: The container Tripleo::Firewall::Service_rules[nova_vnc_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[105 ntp ipv4](provider=iptables): Inserting rule 105 ntp ipv4", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[105 ntp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p udp -m multiport --dports 123 -m state --state NEW -j ACCEPT -m comment --comment 105 ntp ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [flush]", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[105 ntp ipv4]: The container Tripleo::Firewall::Rule[105 ntp] will propagate my refresh event", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): Inserting rule 105 ntp ipv6", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p udp -m multiport --dports 123 -m state --state NEW -j ACCEPT -m comment --comment 105 ntp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[105 ntp ipv6]: The container Tripleo::Firewall::Rule[105 ntp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[105 ntp]: The container Tripleo::Firewall::Service_rules[ntp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ntp]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): Inserting rule 130 pacemaker tcp ipv4", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 2224,3121,21064 -m state --state NEW -j ACCEPT -m comment --comment 130 pacemaker tcp ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [flush]", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: The container Tripleo::Firewall::Rule[130 pacemaker tcp] will propagate my refresh event", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): Inserting rule 130 pacemaker tcp ipv6", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 2224,3121,21064 -m state --state NEW -j ACCEPT -m comment --comment 130 pacemaker tcp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: The container Tripleo::Firewall::Rule[130 pacemaker tcp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: The container Tripleo::Firewall::Service_rules[pacemaker] will propagate my refresh event", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): Inserting rule 131 pacemaker udp ipv4", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 26 --wait -t filter -p udp -m multiport --dports 5405 -m state --state NEW -j ACCEPT -m 
comment --comment 131 pacemaker udp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [flush]", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: The container Tripleo::Firewall::Rule[131 pacemaker udp] will propagate my refresh event", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): Inserting rule 131 pacemaker udp ipv6", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t filter -p udp -m multiport --dports 5405 -m state --state NEW -j ACCEPT -m comment --comment 131 pacemaker udp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: The container Tripleo::Firewall::Rule[131 pacemaker udp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: The container Tripleo::Firewall::Service_rules[pacemaker] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): Inserting rule 140 panko-api ipv4", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): Current resource: 
Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8977,13977 -m state --state NEW -j ACCEPT -m comment --comment 140 panko-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[140 panko-api ipv4]: The container Tripleo::Firewall::Rule[140 panko-api] will propagate my refresh event", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): Inserting rule 140 panko-api ipv6", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8977,13977 -m state --state NEW -j ACCEPT -m comment --comment 140 panko-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[140 panko-api ipv6]: The container Tripleo::Firewall::Rule[140 panko-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: The container Tripleo::Firewall::Service_rules[panko_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): Inserting rule 109 rabbitmq-bundle ipv4", > "Debug: Firewall[109 rabbitmq-bundle 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3122,4369,5672,25672 -m state --state NEW -j ACCEPT -m comment --comment 109 rabbitmq-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: The container Tripleo::Firewall::Rule[109 rabbitmq-bundle] will propagate my refresh event", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): Inserting rule 109 rabbitmq-bundle ipv6", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 3122,4369,5672,25672 -m state --state NEW -j ACCEPT -m comment --comment 109 rabbitmq-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: The container Tripleo::Firewall::Rule[109 rabbitmq-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: The container Tripleo::Firewall::Service_rules[rabbitmq] will propagate my refresh event", > "Debug: 
Tripleo::Firewall::Service_rules[rabbitmq]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): Inserting rule 108 redis-bundle ipv4", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3124,6379,26379 -m state --state NEW -j ACCEPT -m comment --comment 108 redis-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: The container Tripleo::Firewall::Rule[108 redis-bundle] will propagate my refresh event", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): Inserting rule 108 redis-bundle ipv6", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 3124,6379,26379 -m state --state NEW -j ACCEPT -m comment --comment 108 redis-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: The container Tripleo::Firewall::Rule[108 redis-bundle] will propagate my refresh 
event", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: The container Tripleo::Firewall::Service_rules[redis] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[redis]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): Inserting rule 122 swift proxy ipv4", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 22 --wait -t filter -p tcp -m multiport --dports 8080,13808 -m state --state NEW -j ACCEPT -m comment --comment 122 swift proxy ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: The container Tripleo::Firewall::Rule[122 swift proxy] will propagate my refresh event", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): Inserting rule 122 swift proxy ipv6", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 22 --wait -t filter -p tcp -m multiport --dports 8080,13808 -m state --state NEW -j ACCEPT -m comment --comment 122 swift proxy ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): 
[persist_iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: The container Tripleo::Firewall::Rule[122 swift proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: The container Tripleo::Firewall::Service_rules[swift_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): Inserting rule 123 swift storage ipv4", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 23 --wait -t filter -p tcp -m multiport --dports 873,6000,6001,6002 -m state --state NEW -j ACCEPT -m comment --comment 123 swift storage ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [flush]", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[123 swift storage ipv4]: The container Tripleo::Firewall::Rule[123 swift storage] will propagate my refresh event", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): Inserting rule 123 swift storage ipv6", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 23 --wait -t filter -p tcp -m multiport --dports 873,6000,6001,6002 -m state --state NEW -j ACCEPT -m comment --comment 123 swift storage ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 
swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[123 swift storage ipv6]: The container Tripleo::Firewall::Rule[123 swift storage] will propagate my refresh event", > "Debug: Class[Firewall::Linux::Redhat]: The container Stage[main] will propagate my refresh event", > "Debug: Exec[nonpersistent_v4_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables'", > "Debug: Exec[nonpersistent_v6_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/iptables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/iptables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup](provider=posix): Executing check '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector'", > "Debug: Executing: '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/ip6tables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/ip6tables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup](provider=posix): Executing check '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector'", > 
"Debug: Executing: '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector'", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: The container Tripleo::Firewall::Service_rules[swift_storage] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall]: The container Stage[main] will propagate my refresh event", > "Debug: Finishing transaction 37806900", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.02 seconds", > "Notice: Applied catalog in 88.53 seconds", > "Changes:", > " Total: 166", > "Events:", > " Success: 166", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 212", > " Restarted: 4", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Group: 0.02", > " User: 0.05", > " Sysctl: 0.16", > " File: 0.20", > " Sysctl runtime: 0.22", > " Augeas: 0.41", > " Package: 0.43", > " Firewall: 15.38", > " Last run: 1534432960", > " Service: 3.93", > " Config retrieval: 5.10", > " Exec: 52.53", > " Total: 78.44", > "Version:", > " Config: 1534432867", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections reporting, metrics", > "Debug: Finishing transaction 63691580", > "Debug: Received report to process from controller-1.localdomain", > "Debug: Processing report from controller-1.localdomain with processor Puppet::Reports::Store", > "erlexec: HOME must be set", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ip_address instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp\", 56]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 35]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ssh/manifests/server.pp\", 12]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 42]" > ] > } > > TASK [Run docker-puppet tasks (generate config) during step 1] ***************** > ok: [localhost] > > TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** > fatal: [localhost]: FAILED! => { > "failed_when_result": true, > "outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [ > "2018-08-16 15:22:45,468 INFO: 23096 -- Running docker-puppet", > "2018-08-16 15:22:45,469 INFO: 23096 -- Service compilation completed.", > "2018-08-16 15:22:45,470 INFO: 23096 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-08-16 15:22:45,481 INFO: 23097 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4", > "2018-08-16 15:22:45,481 INFO: 23098 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:22:45,481 INFO: 23099 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:22:45,483 INFO: 23097 -- Removing container: docker-puppet-nova_placement", > "2018-08-16 15:22:45,483 INFO: 23098 -- Removing container: docker-puppet-heat_api", > "2018-08-16 15:22:45,483 INFO: 23099 -- Removing container: docker-puppet-mysql", > "2018-08-16 15:22:45,524 INFO: 23097 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4", > "2018-08-16 15:22:45,524 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:22:45,527 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:12,316 INFO: 23099 -- Removing container: docker-puppet-mysql", > "2018-08-16 15:23:12,351 INFO: 23099 -- Finished processing puppet configs for mysql", > "2018-08-16 
15:23:12,352 INFO: 23099 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4", > "2018-08-16 15:23:12,352 INFO: 23099 -- Removing container: docker-puppet-gnocchi", > "2018-08-16 15:23:12,375 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4", > "2018-08-16 15:23:19,633 INFO: 23098 -- Removing container: docker-puppet-heat_api", > "2018-08-16 15:23:19,697 INFO: 23098 -- Finished processing puppet configs for heat_api", > "2018-08-16 15:23:19,698 INFO: 23098 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:23:19,698 INFO: 23098 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-16 15:23:19,726 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:23:21,481 INFO: 23097 -- Removing container: docker-puppet-nova_placement", > "2018-08-16 15:23:21,534 INFO: 23097 -- Finished processing puppet configs for nova_placement", > "2018-08-16 15:23:21,535 INFO: 23097 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4", > "2018-08-16 15:23:21,535 INFO: 23097 -- Removing container: docker-puppet-aodh", > "2018-08-16 15:23:21,570 INFO: 23097 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4", > "2018-08-16 15:23:34,664 INFO: 23099 -- Removing container: docker-puppet-gnocchi", > "2018-08-16 15:23:34,704 INFO: 23099 -- Finished processing puppet configs for gnocchi", > "2018-08-16 15:23:34,705 INFO: 23099 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:34,705 INFO: 23099 -- Removing container: docker-puppet-clustercheck", > "2018-08-16 15:23:34,727 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 
15:23:38,513 INFO: 23097 -- Removing container: docker-puppet-aodh", > "2018-08-16 15:23:38,563 INFO: 23097 -- Finished processing puppet configs for aodh", > "2018-08-16 15:23:38,564 INFO: 23097 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4", > "2018-08-16 15:23:38,564 INFO: 23097 -- Removing container: docker-puppet-nova", > "2018-08-16 15:23:38,593 INFO: 23097 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4", > "2018-08-16 15:23:38,613 INFO: 23098 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-16 15:23:38,673 INFO: 23098 -- Finished processing puppet configs for swift_ringbuilder", > "2018-08-16 15:23:38,674 INFO: 23098 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4", > "2018-08-16 15:23:38,674 INFO: 23098 -- Removing container: docker-puppet-glance_api", > "2018-08-16 15:23:38,704 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4", > "2018-08-16 15:23:42,553 INFO: 23099 -- Removing container: docker-puppet-clustercheck", > "2018-08-16 15:23:42,605 INFO: 23099 -- Finished processing puppet configs for clustercheck", > "2018-08-16 15:23:42,606 INFO: 23099 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4", > "2018-08-16 15:23:42,606 INFO: 23099 -- Removing container: docker-puppet-redis", > "2018-08-16 15:23:42,639 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4", > "2018-08-16 15:23:54,136 INFO: 23099 -- Removing container: docker-puppet-redis", > "2018-08-16 15:23:54,169 INFO: 23099 -- Finished processing puppet configs for redis", > "2018-08-16 15:23:54,170 INFO: 23099 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4", > "2018-08-16 15:23:54,170 INFO: 23099 -- Removing container: docker-puppet-memcached", 
> "2018-08-16 15:23:54,196 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4", > "2018-08-16 15:23:57,802 INFO: 23097 -- Removing container: docker-puppet-nova", > "2018-08-16 15:23:57,858 INFO: 23097 -- Finished processing puppet configs for nova", > "2018-08-16 15:23:57,859 INFO: 23097 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4", > "2018-08-16 15:23:57,859 INFO: 23097 -- Removing container: docker-puppet-iscsid", > "2018-08-16 15:23:57,888 INFO: 23097 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4", > "2018-08-16 15:23:59,178 INFO: 23098 -- Removing container: docker-puppet-glance_api", > "2018-08-16 15:23:59,220 INFO: 23098 -- Finished processing puppet configs for glance_api", > "2018-08-16 15:23:59,220 INFO: 23098 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4", > "2018-08-16 15:23:59,221 INFO: 23098 -- Removing container: docker-puppet-keystone", > "2018-08-16 15:23:59,246 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4", > "2018-08-16 15:24:04,046 INFO: 23099 -- Removing container: docker-puppet-memcached", > "2018-08-16 15:24:04,097 INFO: 23099 -- Finished processing puppet configs for memcached", > "2018-08-16 15:24:04,097 INFO: 23099 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4", > "2018-08-16 15:24:04,098 INFO: 23099 -- Removing container: docker-puppet-panko", > "2018-08-16 15:24:04,125 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4", > "2018-08-16 15:24:05,835 INFO: 23097 -- Removing container: docker-puppet-iscsid", > "2018-08-16 15:24:05,883 INFO: 23097 -- Finished processing puppet configs for iscsid", > "2018-08-16 15:24:05,883 INFO: 23097 -- Starting configuration of heat using image 
192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:24:05,884 INFO: 23097 -- Removing container: docker-puppet-heat", > "2018-08-16 15:24:05,914 INFO: 23097 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:24:16,836 INFO: 23098 -- Removing container: docker-puppet-keystone", > "2018-08-16 15:24:16,897 INFO: 23098 -- Finished processing puppet configs for keystone", > "2018-08-16 15:24:16,897 INFO: 23098 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:24:16,898 INFO: 23098 -- Removing container: docker-puppet-swift", > "2018-08-16 15:24:16,929 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:24:17,949 INFO: 23097 -- Removing container: docker-puppet-heat", > "2018-08-16 15:24:17,991 INFO: 23097 -- Finished processing puppet configs for heat", > "2018-08-16 15:24:17,992 INFO: 23097 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4", > "2018-08-16 15:24:17,992 INFO: 23097 -- Removing container: docker-puppet-cinder", > "2018-08-16 15:24:18,019 INFO: 23097 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4", > "2018-08-16 15:24:20,206 INFO: 23099 -- Removing container: docker-puppet-panko", > "2018-08-16 15:24:20,264 INFO: 23099 -- Finished processing puppet configs for panko", > "2018-08-16 15:24:20,264 INFO: 23099 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4", > "2018-08-16 15:24:20,265 INFO: 23099 -- Removing container: docker-puppet-haproxy", > "2018-08-16 15:24:20,296 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4", > "2018-08-16 15:24:28,349 INFO: 23098 -- Removing container: docker-puppet-swift", > "2018-08-16 15:24:28,395 INFO: 23098 -- Finished processing 
puppet configs for swift", > "2018-08-16 15:24:28,396 INFO: 23098 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4", > "2018-08-16 15:24:28,396 INFO: 23098 -- Removing container: docker-puppet-crond", > "2018-08-16 15:24:28,421 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4", > "2018-08-16 15:24:32,451 ERROR: 23099 -- Failed running docker-puppet.py for haproxy", > "2018-08-16 15:24:32,452 ERROR: 23099 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "", > "2018-08-16 15:24:32,452 ERROR: 23099 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-1", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: Unknown variable: 'haproxy_member_options_real'. 
at /etc/puppet/modules/tripleo/manifests/haproxy.pp:1082:34", > "Error: Evaluation Error: Error while evaluating a Function Call, union(): Every parameter must be an array at /etc/puppet/modules/tripleo/manifests/haproxy.pp:1082:28 on node controller-1.localdomain", > "+ rc=1", > "+ set -e", > "+ '[' 1 -ne 2 -a 1 -ne 0 ']'", > "+ exit 1", > "2018-08-16 15:24:32,452 INFO: 23099 -- Finished processing puppet configs for haproxy", > "2018-08-16 15:24:32,452 INFO: 23099 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4", > "2018-08-16 15:24:32,453 INFO: 23099 -- Removing container: docker-puppet-ceilometer", > "2018-08-16 15:24:32,480 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4", > "2018-08-16 15:24:36,002 INFO: 23098 -- Removing container: docker-puppet-crond", > "2018-08-16 15:24:36,041 INFO: 23098 -- Finished processing puppet configs for crond", > "2018-08-16 15:24:36,041 INFO: 23098 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4", > "2018-08-16 15:24:36,042 INFO: 23098 -- Removing container: docker-puppet-rabbitmq", > "2018-08-16 15:24:36,067 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4", > "2018-08-16 15:24:45,430 INFO: 23099 -- Removing container: docker-puppet-ceilometer", > "2018-08-16 15:24:45,473 INFO: 23099 -- Finished processing puppet configs for ceilometer", > "2018-08-16 15:24:45,473 INFO: 23099 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4", > "2018-08-16 15:24:45,474 INFO: 23099 -- Removing container: docker-puppet-horizon", > "2018-08-16 15:24:45,499 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4", > "2018-08-16 15:24:46,778 INFO: 23097 -- Removing container: docker-puppet-cinder", > "2018-08-16 15:24:46,840 INFO: 23097 -- 
Finished processing puppet configs for cinder", > "2018-08-16 15:24:54,401 INFO: 23098 -- Removing container: docker-puppet-rabbitmq", > "2018-08-16 15:24:54,447 INFO: 23098 -- Finished processing puppet configs for rabbitmq", > "2018-08-16 15:24:54,448 INFO: 23098 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4", > "2018-08-16 15:24:54,450 INFO: 23098 -- Removing container: docker-puppet-neutron", > "2018-08-16 15:24:54,475 INFO: 23098 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4", > "2018-08-16 15:25:04,065 INFO: 23099 -- Removing container: docker-puppet-horizon", > "2018-08-16 15:25:04,114 INFO: 23099 -- Finished processing puppet configs for horizon", > "2018-08-16 15:25:04,114 INFO: 23099 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4", > "2018-08-16 15:25:04,116 INFO: 23099 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-16 15:25:04,146 INFO: 23099 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4", > "2018-08-16 15:25:13,732 INFO: 23098 -- Removing container: docker-puppet-neutron", > "2018-08-16 15:25:13,782 INFO: 23098 -- Finished processing puppet configs for neutron", > "2018-08-16 15:25:19,808 INFO: 23099 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-16 15:25:19,884 INFO: 23099 -- Finished processing puppet configs for heat_api_cfn", > "2018-08-16 15:25:19,885 ERROR: 23096 -- ERROR configuring haproxy" > ] > } > to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/3343110c-aaa9-433d-99f3-aba5425446e2_playbook.retry > > PLAY RECAP ********************************************************************* > localhost : ok=25 changed=12 unreachable=0 failed=1 > > deploy_stderr: | > >overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0: > resource_type: OS::Heat::StructuredDeployment > physical_resource_id: 
518c2543-aaea-4e7d-8a4f-cb5224645228 > status: CREATE_FAILED > status_reason: | > Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 > deploy_stdout: | > > PLAY [localhost] *************************************************************** > > TASK [Gathering Facts] ********************************************************* > ok: [localhost] > > TASK [Create /var/lib/tripleo-config directory] ******************************** > changed: [localhost] > > TASK [Check if puppet step_config.pp manifest exists] ************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write the puppet step_config manifest] *********************************** > changed: [localhost] > > TASK [Create /var/lib/docker-puppet] ******************************************* > changed: [localhost] > > TASK [Check if docker-puppet puppet_config.yaml configuration file exists] ***** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker-puppet.json file] ******************************************* > changed: [localhost] > > TASK [Create /var/lib/docker-config-scripts] *********************************** > changed: [localhost] > > TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** > ok: [localhost] > > TASK [Check if docker_config_scripts.yaml file exists] ************************* > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker config scripts] ********************************************* > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport 
OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} 
seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', u'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', u'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', u'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', u'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho 
"{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', u'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', u'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) > > TASK [Set docker_config_default fact] ****************************************** > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] > > TASK [Check if docker_config.yaml file exists] ********************************* > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Set docker_startup_configs_with_default fact] **************************** > ok: [localhost] > > TASK [Write docker-container-startup-configs] ********************************** > changed: [localhost] > > TASK [Write per-step docker-container-startup-configs] ************************* > changed: [localhost] => (item={'value': {u'cinder_volume_image_tag': {u'start_order': 1, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_image_tag': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_data_ownership': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], u'user': u'root', u'volumes': [u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'detach': False}, u'redis_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_bootstrap': {u'start_order': 1, 
u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=wQHWYDMtN2zP34A7ppnf36KgZ', u'DB_ROOT_PASSWORD=nqmpfBXNCf'], u'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'detach': False}, u'haproxy_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4' 
'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'rabbitmq_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'rabbitmq_bootstrap': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=2vn7bpVGQM3wmDdKDet3'], u'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], u'net': u'host', u'privileged': False}, u'memcached': {u'start_order': 0, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}}, 'key': u'step_1'}) > changed: [localhost] => (item={'value': {u'nova_placement': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'restart': u'always'}, u'swift_rsync_fix': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], u'user': u'root', u'volumes': 
[u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], u'net': u'host', u'detach': False}, u'nova_db_sync': {u'start_order': 3, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'heat_engine_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', 
u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'swift_copy_rings': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'detach': False, u'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], u'user': u'root', u'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, u'nova_api_ensure_default_cell': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], u'net': u'host', u'detach': False}, u'keystone_cron': 
{u'start_order': 4, u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'panko_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'nova_api_db_sync': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'iscsid': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'keystone_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'detach': False, u'privileged': False}, u'ceilometer_init_log': {u'start_order': 0, u'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-08-14.4', u'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'user': u'root'}, u'keystone': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'aodh_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], u'net': u'host', u'detach': False, u'privileged': False}, u'cinder_volume_init_logs': {u'start_order': 0, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'user': u'root', u'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], u'privileged': False}, u'neutron_ovs_bridge': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], u'net': u'host', u'detach': False, u'privileged': True}, u'cinder_api_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'net': u'host', u'detach': False, u'privileged': False}, u'nova_api_map_cell0': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'glance_api_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'environment': [u'KOLLA_BOOTSTRAP=True', 
u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], u'net': u'host', u'detach': False, u'privileged': False}, u'neutron_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', 
u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'keystone_bootstrap': {u'action': u'exec', u'start_order': 3, u'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'XjxMBFahCQcXFECTsWUkKHBKA'], u'user': u'root'}, u'horizon': {u'image': u'192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 
u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_setup_srv': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'command': [u'chown', u'-R', u'swift:', u'/srv/node'], u'user': u'root', u'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) > changed: [localhost] => (item={'value': {u'gnocchi_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], u'user': u'root', u'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, u'mysql_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', 
u'/var/lib/mysql:/var/lib/mysql:rw'], u'net': u'host', u'detach': False}, u'gnocchi_init_lib': {u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], u'user': u'root', u'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, u'cinder_api_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'privileged': False, u'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'user': u'root'}, u'create_dnsmasq_wrapper': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-08-14.4', u'pid': u'host', u'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'detach': False}, u'panko_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], u'user': u'root', u'volumes': [u'/var/log/containers/panko:/var/log/panko', 
u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, u'redis_init_bundle': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'config_volume': u'redis_init_bundle', u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False}, u'cinder_scheduler_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'privileged': False, u'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], u'user': u'root'}, u'glance_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], u'privileged': False, u'volumes': [u'/var/log/containers/glance:/var/log/glance'], u'user': u'root'}, u'clustercheck': {u'start_order': 1, 
u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'restart': u'always'}, u'haproxy_init_bundle': {u'start_order': 3, u'image': u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False, u'privileged': True}, u'neutron_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], u'privileged': False, u'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], u'user': u'root'}, u'mysql_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'config_volume': u'mysql', u'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'rabbitmq_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], u'net': u'host', u'detach': False}, u'nova_api_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'privileged': False, u'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], u'user': u'root'}, u'haproxy_restart_bundle': {u'start_order': 2, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'config_volume': u'haproxy', u'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'create_keepalived_wrapper': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2018-08-14.4', u'pid': u'host', u'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'detach': False}, u'rabbitmq_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'config_volume': u'rabbitmq', u'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'horizon_fix_perms': {u'image': u'192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], u'user': u'root', u'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, u'aodh_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], u'user': u'root', 
u'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, u'nova_metadata_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'privileged': False, u'volumes': [u'/var/log/containers/nova:/var/log/nova'], u'user': u'root'}, u'redis_restart_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'config_volume': u'redis', u'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'heat_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], u'user': u'root', u'volumes': [u'/var/log/containers/heat:/var/log/heat']}, u'nova_placement_init_log': {u'start_order': 1, u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4', u'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], u'user': u'root'}, u'keystone_init_log': {u'start_order': 1, u'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], u'user': u'root'}}, 'key': u'step_2'}) > changed: [localhost] => (item={'value': {u'cinder_volume_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False}, u'gnocchi_api': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'gnocchi_statsd': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-statsd:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'gnocchi_metricd': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-metricd:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_api_discover_hosts': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], u'net': u'host', u'detach': False}, u'ceilometer_gnocchi_upgrade': {u'start_order': 99, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'detach': False, u'privileged': False}, u'cinder_volume_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', 
u'config_volume': u'cinder', u'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'gnocchi_db_sync': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', 
u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], u'net': u'host', u'detach': False, u'privileged': False}}, 'key': u'step_5'}) > changed: [localhost] => (item={'value': {u'swift_container_updater': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'aodh_evaluator': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-evaluator:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_scheduler': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-scheduler:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_object_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'cinder_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_proxy': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], u'net': u'host', u'restart': u'always'}, u'neutron_dhcp': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', 
u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'heat_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_object_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'neutron_metadata_agent': {u'start_order': 10, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-metadata-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'ceilometer_agent_central': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'keystone_refresh': {u'action': u'exec', u'start_order': 1, u'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], u'user': u'root'}, u'swift_account_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'aodh_notifier': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-notifier:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_consoleauth': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-consoleauth:2018-08-14.4', u'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'glance_api': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], u'net': u'host', u'privileged': False, u'restart': u'always'}, 
u'swift_account_reaper': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'ceilometer_agent_notification': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', 
u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_vnc_proxy': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-novncproxy:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_rsync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'nova_api': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'aodh_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_metadata': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'nova', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'heat_engine': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_container_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'neutron_l3_agent': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', 
u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'cinder_scheduler': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_conductor': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-conductor:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'heat_api_cfn': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'neutron_ovs_agent': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-openvswitch-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'cinder_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_account_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_container_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_updater': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': 
u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_expirer': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'heat_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4', u'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_container_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'panko_api': {u'start_order': 2, 
u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'aodh_listener': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-listener:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', 
u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'neutron_api': {u'start_order': 0, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_account_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'logrotate_crond': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], u'net': u'none', u'privileged': True, u'restart': u'always'}}, 'key': u'step_4'}) > changed: [localhost] => (item={'value': {}, 'key': u'step_6'}) > > TASK [Create /var/lib/kolla/config_files directory] **************************** > changed: [localhost] > > TASK [Check if kolla_config.yaml file exists] ********************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write kolla config json files] ******************************************* > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': 
u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-scheduler ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], 
u'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-panko/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', u'permissions': [{u'owner': u'root:ceilometer', u'path': u'/etc/panko', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': 
u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/neutron_ovs_agent_launcher.sh', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'rabbitmq:rabbitmq', u'path': u'/var/lib/rabbitmq', u'recurse': True}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/var/log/rabbitmq', u'recurse': True}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/etc/pki/tls/certs/rabbitmq.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/etc/pki/tls/private/rabbitmq.key', 
u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/gnocchi-metricd', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': 
u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', u'permissions': [{u'owner': u'swift:swift', u'path': u'/var/cache/swift', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/iscsi/', u'source': u'/var/lib/kolla/config_files/src-iscsid/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) > changed: [localhost] => (item={'value': {u'config_files': 
[{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'panko:panko', u'path': u'/var/log/panko', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-listener', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-server 
/etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'apache:apache', u'path': u'/var/log/horizon/', u'recurse': True}, {u'owner': u'apache:apache', u'path': u'/etc/openstack-dashboard/', u'recurse': True}, {u'owner': u'apache:apache', u'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', u'recurse': False}, {u'owner': u'apache:apache', u'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', u'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/etc/pki/tls/certs/neutron.crt'}, {u'owner': u'neutron:neutron', u'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) > changed: 
[localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', u'permissions': [{u'owner': u'glance:glance', u'path': u'/var/lib/glance', u'recurse': True}, {u'owner': u'glance:glance', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'mysql:mysql', u'path': u'/var/log/mysql', u'recurse': True}, {u'owner': u'mysql:mysql', u'path': u'/etc/pki/tls/certs/mysql.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'mysql:mysql', u'path': u'/etc/pki/tls/private/mysql.key', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', 
u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-api-metadata ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-consoleauth ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', 
u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': 
u'/var/lib/kolla/config_files/cinder_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'optional': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', u'permissions': [{u'owner': u'haproxy:haproxy', u'path': u'/var/lib/haproxy', u'recurse': True}, {u'owner': u'haproxy:haproxy', u'path': u'/etc/pki/tls/certs/haproxy/*', u'optional': True, u'perm': u'0600'}, {u'owner': u'haproxy:haproxy', u'path': u'/etc/pki/tls/private/haproxy/*', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-notifier', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'keystone:keystone', u'path': u'/var/log/keystone', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': 
[{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-conductor ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/iscsi/', u'source': u'/var/lib/kolla/config_files/src-iscsid/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'optional': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'redis:redis', u'path': u'/var/run/redis', u'recurse': True}, {u'owner': u'redis:redis', u'path': u'/var/lib/redis', u'recurse': True}, {u'owner': u'redis:redis', u'path': u'/var/log/redis', u'recurse': True}, {u'owner': u'redis:redis', 
u'path': u'/etc/pki/tls/certs/redis.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'redis:redis', u'path': u'/etc/pki/tls/private/redis.key', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', 
u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-evaluator', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/gnocchi-statsd', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) > > TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ > > TASK [Check if docker_puppet_tasks.yaml file exists] *************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker-puppet-tasks json files] ************************************ > changed: [localhost] => (item={'value': [{u'puppet_tags': 
u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', u'config_volume': u'keystone_init_tasks', u'step_config': u'include ::tripleo::profile::base::keystone', u'config_image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4'}], 'key': u'step_3'}) > > TASK [Set host puppet debugging fact string] *********************************** > ok: [localhost] > > TASK [Write the config_step hieradata] ***************************************** > changed: [localhost] > > TASK [Run puppet host configuration for step 1] ******************************** > changed: [localhost] > > TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** > ok: [localhost] => { > "failed_when_result": false, > "outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [ > "Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=UTF-8", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > 
"Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: 
Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from 
/etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > 
"Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: value for ec2_public_ipv4 is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still 
nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: Found 
no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: Facter: Matching apachectl 'Server version: Apache/2.4.6 (Red Hat Enterprise Linux)", > "Server built: May 28 2018 16:19:32'", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > 
"Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not 
exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: hiera(): Hiera 
JSON backend starting", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking for data source E89E6DC3-E97D-4F9D-8131-4637B37D46DF", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/E89E6DC3-E97D-4F9D-8131-4637B37D46DF.json, skipping", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/trusted_cas.pp' in environment production", > "Debug: Automatically imported tripleo::trusted_cas from tripleo/trusted_cas into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source 
cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::trusted_cas::ca_map in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/docker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::docker from tripleo/profile/base/docker into production", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::registry_mirror in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::docker_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::additional_sockets in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::configure_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::network_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::configure_storage in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::storage_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::debug in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::deployment_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registry_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::docker_namespace in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::docker::insecure_registry in JSON backend", > "Debug: hiera(): Looking up deployment_user in JSON backend", > "Debug: importing '/etc/puppet/modules/sysctl/manifests/value.pp' in environment production", > "Debug: Automatically imported sysctl::value from sysctl/value into production", > "Debug: Resource group[docker] was not determined to be defined", > "Debug: Create new resource group[docker] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/kernel.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::kernel from tripleo/profile/base/kernel into production", > "Debug: hiera(): Looking up tripleo::profile::base::kernel::module_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::kernel::sysctl_settings in JSON backend", > "Debug: hiera(): Looking up kernel_modules in JSON backend", > "Debug: hiera(): Looking up sysctl_settings in JSON backend", > "Debug: importing '/etc/puppet/modules/kmod/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/kmod/manifests/load.pp' in environment production", > "Debug: Automatically imported kmod::load from kmod/load into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql::client from tripleo/profile/base/database/mysql/client into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::enable_ssl in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_read_default_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_read_default_group in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_client_bind_address in 
JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::ssl_ca in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::step in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::time::ntp from tripleo/profile/base/time/ntp into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/init.pp' in environment production", > "Debug: Automatically imported ntp from ntp into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/params.pp' in environment production", > "Debug: Automatically imported ntp::params from ntp/params into production", > "Debug: hiera(): Looking up ntp::autoupdate in JSON backend", > "Debug: hiera(): Looking up ntp::broadcastclient in JSON backend", > "Debug: hiera(): Looking up ntp::config in JSON backend", > "Debug: hiera(): Looking up ntp::config_dir in JSON backend", > "Debug: hiera(): Looking up ntp::config_file_mode in JSON backend", > "Debug: hiera(): Looking up ntp::config_template in JSON backend", > "Debug: hiera(): Looking up ntp::disable_auth in JSON backend", > "Debug: hiera(): Looking up ntp::disable_dhclient in JSON backend", > "Debug: hiera(): Looking up ntp::disable_kernel in JSON backend", > "Debug: hiera(): Looking up ntp::disable_monitor in JSON backend", > "Debug: hiera(): Looking up ntp::fudge in JSON backend", > "Debug: hiera(): Looking up ntp::driftfile in JSON backend", > "Debug: hiera(): Looking up ntp::leapfile in JSON backend", > "Debug: hiera(): Looking up ntp::logfile in JSON backend", > "Debug: hiera(): Looking up ntp::iburst_enable in JSON backend", > "Debug: hiera(): Looking up ntp::keys in JSON backend", > "Debug: hiera(): Looking up ntp::keys_enable in JSON backend", > "Debug: hiera(): Looking up ntp::keys_file in JSON backend", > "Debug: hiera(): Looking up 
ntp::keys_controlkey in JSON backend", > "Debug: hiera(): Looking up ntp::keys_requestkey in JSON backend", > "Debug: hiera(): Looking up ntp::keys_trusted in JSON backend", > "Debug: hiera(): Looking up ntp::minpoll in JSON backend", > "Debug: hiera(): Looking up ntp::maxpoll in JSON backend", > "Debug: hiera(): Looking up ntp::package_ensure in JSON backend", > "Debug: hiera(): Looking up ntp::package_manage in JSON backend", > "Debug: hiera(): Looking up ntp::package_name in JSON backend", > "Debug: hiera(): Looking up ntp::panic in JSON backend", > "Debug: hiera(): Looking up ntp::peers in JSON backend", > "Debug: hiera(): Looking up ntp::preferred_servers in JSON backend", > "Debug: hiera(): Looking up ntp::restrict in JSON backend", > "Debug: hiera(): Looking up ntp::interfaces in JSON backend", > "Debug: hiera(): Looking up ntp::interfaces_ignore in JSON backend", > "Debug: hiera(): Looking up ntp::servers in JSON backend", > "Debug: hiera(): Looking up ntp::service_enable in JSON backend", > "Debug: hiera(): Looking up ntp::service_ensure in JSON backend", > "Debug: hiera(): Looking up ntp::service_manage in JSON backend", > "Debug: hiera(): Looking up ntp::service_name in JSON backend", > "Debug: hiera(): Looking up ntp::service_provider in JSON backend", > "Debug: hiera(): Looking up ntp::stepout in JSON backend", > "Debug: hiera(): Looking up ntp::tinker in JSON backend", > "Debug: hiera(): Looking up ntp::tos in JSON backend", > "Debug: hiera(): Looking up ntp::tos_minclock in JSON backend", > "Debug: hiera(): Looking up ntp::tos_minsane in JSON backend", > "Debug: hiera(): Looking up ntp::tos_floor in JSON backend", > "Debug: hiera(): Looking up ntp::tos_ceiling in JSON backend", > "Debug: hiera(): Looking up ntp::tos_cohort in JSON backend", > "Debug: hiera(): Looking up ntp::udlc in JSON backend", > "Debug: hiera(): Looking up ntp::udlc_stratum in JSON backend", > "Debug: hiera(): Looking up ntp::ntpsigndsocket in JSON backend", > "Debug: hiera(): 
Looking up ntp::authprov in JSON backend", > "Debug: importing '/etc/puppet/modules/ntp/manifests/install.pp' in environment production", > "Debug: Automatically imported ntp::install from ntp/install into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/config.pp' in environment production", > "Debug: Automatically imported ntp::config from ntp/config into production", > "Debug: Scope(Class[Ntp::Config]): Retrieving template ntp/ntp.conf.erb", > "Debug: template[/etc/puppet/modules/ntp/templates/ntp.conf.erb]: Bound template variables for /etc/puppet/modules/ntp/templates/ntp.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/ntp/templates/ntp.conf.erb]: Interpolated template /etc/puppet/modules/ntp/templates/ntp.conf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/ntp/manifests/service.pp' in environment production", > "Debug: Automatically imported ntp::service from ntp/service into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > 
"Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported 
pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw 
in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/snmp.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::snmp from tripleo/profile/base/snmp into production", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_config in JSON backend", > "Debug: 
hiera(): Looking up tripleo::profile::base::snmp::snmpd_password in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::step in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/sshd.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::sshd from tripleo/profile/base/sshd into production", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::bannertext in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::motd in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::port in JSON backend", > "Debug: hiera(): Looking up ssh:server::options in JSON backend", > "Debug: importing '/etc/puppet/modules/ssh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server.pp' in environment production", > "Debug: Automatically imported ssh::server from ssh/server into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/params.pp' in environment production", > "Debug: Automatically imported ssh::params from ssh/params into production", > "Debug: hiera(): Looking up ssh::server::ensure in JSON backend", > "Debug: hiera(): Looking up ssh::server::validate_sshd_file in JSON backend", > "Debug: hiera(): Looking up ssh::server::use_augeas in JSON backend", > "Debug: hiera(): Looking up ssh::server::options_absent in JSON backend", > "Debug: hiera(): Looking up ssh::server::match_block in JSON backend", > "Debug: hiera(): Looking up ssh::server::use_issue_net in JSON backend", > "Debug: hiera(): Looking up ssh::server::options in JSON backend", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/install.pp' in environment production", > "Debug: Automatically imported ssh::server::install from 
ssh/server/install into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/config.pp' in environment production", > "Debug: Automatically imported ssh::server::config from ssh/server/config into production", > "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production", > "Debug: Automatically imported concat from concat into production", > "Debug: Scope(Class[Ssh::Server::Config]): Retrieving template ssh/sshd_config.erb", > "Debug: template[/etc/puppet/modules/ssh/templates/sshd_config.erb]: Bound template variables for /etc/puppet/modules/ssh/templates/sshd_config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/ssh/templates/sshd_config.erb]: Interpolated template /etc/puppet/modules/ssh/templates/sshd_config.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production", > "Debug: Automatically imported concat::fragment from concat/fragment into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/service.pp' in environment production", > "Debug: Automatically imported ssh::server::service from ssh/server/service into production", > "Debug: hiera(): Looking up ssh::server::service::ensure in JSON backend", > "Debug: hiera(): Looking up ssh::server::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/timezone/manifests/init.pp' in environment production", > "Debug: Automatically imported timezone from timezone into production", > "Debug: hiera(): Looking up timezone::timezone in JSON backend", > "Debug: hiera(): Looking up timezone::ensure in JSON backend", > "Debug: hiera(): Looking up timezone::hwutc in JSON backend", > "Debug: hiera(): Looking up timezone::autoupgrade in JSON backend", > "Debug: hiera(): Looking up timezone::notify_services in JSON backend", > "Debug: hiera(): Looking up timezone::package in JSON backend", > "Debug: hiera(): Looking up timezone::zoneinfo_dir in JSON 
backend", > "Debug: hiera(): Looking up timezone::localtime_file in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file_template in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file_supports_comment in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_update in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > "Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON 
backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > 
"Debug: importing '/etc/puppet/modules/tripleo/manifests/packages.pp' in environment production", > "Debug: Automatically imported tripleo::packages from tripleo/packages into production", > "Debug: hiera(): Looking up tripleo::packages::enable_install in JSON backend", > "Debug: hiera(): Looking up tripleo::packages::enable_upgrade in JSON backend", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/stages.pp' in environment production", > "Debug: Automatically imported stdlib::stages from stdlib/stages into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/tuned.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::tuned from tripleo/profile/base/tuned into production", > "Debug: hiera(): Looking up tripleo::profile::base::tuned::profile in JSON backend", > "Debug: Resource package[tuned] was not determined to be defined", > "Debug: Create new resource package[tuned] with params {\"ensure\"=>\"present\"}", > "Debug: Scope(Kmod::Load[nf_conntrack]): Retrieving template kmod/redhat.modprobe.erb", > "Debug: template[/etc/puppet/modules/kmod/templates/redhat.modprobe.erb]: Bound template variables for /etc/puppet/modules/kmod/templates/redhat.modprobe.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/kmod/templates/redhat.modprobe.erb]: Interpolated template /etc/puppet/modules/kmod/templates/redhat.modprobe.erb in 0.00 seconds", > "Debug: Scope(Kmod::Load[nf_conntrack_proto_sctp]): Retrieving template kmod/redhat.modprobe.erb", > "Debug: importing '/etc/puppet/modules/sysctl/manifests/base.pp' in environment production", > "Debug: Automatically imported sysctl::base from sysctl/base into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON 
backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_api_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_api_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_collector_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_collector_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_expirer_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_expirer_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON 
backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend", > 
"Debug: hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend", > "Debug: hiera(): Looking up memcached_network in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::firewall_rules in JSON backend", > 
"Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::nova_consoleauth::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.rabbitmq.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::rabbitmq::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up snmpd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.sshd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend", > "Debug: Adding relationship from Sysctl::Value[net.ipv4.ip_forward] to Package[docker] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/docker.service.d] to File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf] to Exec[systemd daemon-reload] with 'notify'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[fs.inotify.max_user_instances] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[fs.suid_dumpable] with 'before'", > "Debug: Adding relationship from 
Exec[modprobe nf_conntrack] to Sysctl[kernel.dmesg_restrict] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[kernel.pid_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.core.netdev_max_backlog] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.arp_accept] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.ip_forward] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh1] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh2] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh3] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_intvl] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to 
Sysctl[net.ipv4.tcp_keepalive_probes] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_time] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.lo.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.netfilter.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[fs.inotify.max_user_instances] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[fs.suid_dumpable] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[kernel.dmesg_restrict] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[kernel.pid_max] with 'before'", > "Debug: 
Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.core.netdev_max_backlog] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.arp_accept] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.ip_forward] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh1] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh2] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh3] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_intvl] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_probes] with 'before'", > 
"Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_time] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.lo.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.netfilter.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Anchor[ntp::begin] to Class[Ntp::Install] with 'before'", > "Debug: Adding relationship from Class[Ntp::Install] to Class[Ntp::Config] with 'before'", > "Debug: Adding relationship from Class[Ntp::Config] to Class[Ntp::Service] with 'notify'", > "Debug: Adding relationship from Class[Ntp::Service] to Anchor[ntp::end] with 'before'", > "Debug: Adding relationship from Service[pcsd] to 
Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to 
Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from Anchor[ssh::server::start] to Class[Ssh::Server::Install] with 'before'", > "Debug: Adding relationship from Class[Ssh::Server::Install] to Class[Ssh::Server::Config] with 'before'", > "Debug: Adding relationship from Class[Ssh::Server::Config] to Class[Ssh::Server::Service] with 'notify'", > "Debug: Adding relationship from Class[Ssh::Server::Service] to Anchor[ssh::server::end] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[docker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[chronyd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ntp] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[sshd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[iptables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship 
from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 
panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > 
"Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all 
ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 
pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Stage[runtime] to Stage[setup_infra] with 'before'", > "Debug: Adding relationship from Stage[setup_infra] to Stage[deploy_infra] with 'before'", > "Debug: Adding relationship from Stage[deploy_infra] to Stage[setup_app] 
with 'before'", > "Debug: Adding relationship from Stage[setup_app] to Stage[deploy_app] with 'before'", > "Debug: Adding relationship from Stage[deploy_app] to Stage[deploy] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.29 seconds", > "Debug: /File[/etc/systemd/system/docker.service.d]/seluser: Found seluser default 'system_u' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/selrole: Found selrole default 'object_r' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/seltype: Found seltype default 'container_unit_file_t' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/selrange: Found selrange default 's0' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/seluser: Found seluser default 'system_u' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/selrole: Found selrole default 'object_r' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/seltype: Found seltype default 'container_unit_file_t' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/selrange: Found selrange default 's0' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/docker/daemon.json]/seluser: Found seluser default 'system_u' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/selrole: Found selrole default 'object_r' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/seltype: Found seltype default 'container_config_t' for /etc/docker/daemon.json", > "Debug: 
/File[/etc/docker/daemon.json]/selrange: Found selrange default 's0' for /etc/docker/daemon.json", > "Debug: /File[/var/lib/openstack]/seluser: Found seluser default 'system_u' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/selrole: Found selrole default 'object_r' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/seltype: Found seltype default 'var_lib_t' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/selrange: Found selrange default 's0' for /var/lib/openstack", > "Debug: /File[/etc/ntp.conf]/seluser: Found seluser default 'system_u' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/selrole: Found selrole default 'object_r' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/seltype: Found seltype default 'net_conf_t' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/selrange: Found selrange default 's0' for /etc/ntp.conf", > "Debug: /File[etc-pacemaker]/seluser: Found seluser default 'system_u' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/selrole: Found selrole default 'object_r' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/seltype: Found seltype default 'etc_t' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/selrange: Found selrange default 's0' for /etc/pacemaker", > "Debug: /File[etc-pacemaker-authkey]/seluser: Found seluser default 'system_u' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/selrole: Found selrole default 'object_r' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/seltype: Found seltype default 'etc_t' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/selrange: Found selrange default 's0' for /etc/pacemaker/authkey", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/seluser: Found seluser default 'system_u' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/selrole: Found selrole default 'object_r' for 
/etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/seltype: Found seltype default 'systemd_unit_file_t' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/selrange: Found selrange default 's0' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/localtime]/seluser: Found seluser default 'system_u' for /etc/localtime", > "Debug: /File[/etc/localtime]/selrole: Found selrole default 'object_r' for /etc/localtime", > "Debug: /File[/etc/localtime]/seltype: Found seltype default 'locale_t' for /etc/localtime", > "Debug: /File[/etc/localtime]/selrange: Found selrange default 's0' for /etc/localtime", > "Debug: /File[/etc/sysconfig/iptables]/seluser: Found seluser default 'system_u' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/selrole: Found selrole default 'object_r' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/seltype: Found seltype default 'system_conf_t' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/selrange: Found selrange default 's0' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/ip6tables]/seluser: Found seluser default 'system_u' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/selrole: Found selrole default 'object_r' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/seltype: Found seltype default 'system_conf_t' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/selrange: Found selrange default 's0' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/seluser: Found seluser default 'system_u' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/selrole: Found selrole default 'object_r' for 
/etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/seltype: Found seltype default 'etc_t' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/selrange: Found selrange default 's0' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/seluser: Found seluser default 'system_u' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/selrole: Found selrole default 'object_r' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/seltype: Found seltype default 'etc_t' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/selrange: Found selrange default 's0' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysctl.conf]/seluser: Found seluser default 'system_u' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/selrole: Found selrole default 'object_r' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/seltype: Found seltype default 'system_conf_t' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/selrange: Found selrange default 's0' for /etc/sysctl.conf", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/seluser: Found seluser default 'system_u' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/selrole: Found selrole default 'object_r' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/seltype: Found seltype default 'systemd_unit_file_t' for 
/etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/selrange: Found selrange default 's0' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /Firewall[000 accept related established rules ipv4]: [validate]", > "Debug: /Firewall[000 accept related established rules ipv6]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv4]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv6]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: [validate]", > "Debug: /Firewall[003 accept ssh ipv4]: [validate]", > "Debug: /Firewall[003 accept ssh ipv6]: [validate]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: [validate]", > "Debug: /Firewall[998 log all ipv4]: [validate]", > "Debug: /Firewall[998 log all ipv6]: [validate]", > "Debug: /Firewall[999 drop all ipv4]: [validate]", > "Debug: /Firewall[999 drop all ipv6]: [validate]", > "Debug: /Firewall[128 aodh-api ipv4]: [validate]", > "Debug: /Firewall[128 aodh-api ipv6]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv6]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv4]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv6]: [validate]", > "Debug: /Firewall[119 cinder ipv4]: [validate]", > "Debug: /Firewall[119 cinder ipv6]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]", > "Debug: /Firewall[112 glance_api ipv4]: [validate]", > "Debug: /Firewall[112 glance_api ipv6]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv4]: 
[validate]", > "Debug: /Firewall[107 haproxy stats ipv6]: [validate]", > "Debug: /Firewall[125 heat_api ipv4]: [validate]", > "Debug: /Firewall[125 heat_api ipv6]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv4]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv6]: [validate]", > "Debug: /Firewall[127 horizon ipv4]: [validate]", > "Debug: /Firewall[127 horizon ipv6]: [validate]", > "Debug: /Firewall[111 keystone ipv4]: [validate]", > "Debug: /Firewall[111 keystone ipv6]: [validate]", > "Debug: /Firewall[121 memcached ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]", > "Debug: /Firewall[114 neutron api ipv4]: [validate]", > "Debug: /Firewall[114 neutron api ipv6]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv6]: [validate]", > "Debug: /Firewall[113 nova_api ipv4]: [validate]", > "Debug: /Firewall[113 nova_api ipv6]: [validate]", > "Debug: /Firewall[138 nova_placement ipv4]: [validate]", > "Debug: /Firewall[138 nova_placement ipv6]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]", > "Debug: /Firewall[105 ntp ipv4]: [validate]", > "Debug: /Firewall[105 ntp ipv6]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]", > "Debug: 
/Firewall[131 pacemaker udp ipv4]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]", > "Debug: /Firewall[140 panko-api ipv4]: [validate]", > "Debug: /Firewall[140 panko-api ipv6]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv4]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv6]: [validate]", > "Debug: /Firewall[122 swift proxy ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv6]: [validate]", > "Debug: /Firewall[123 swift storage ipv4]: [validate]", > "Debug: /Firewall[123 swift storage ipv6]: [validate]", > "Debug: Creating default schedules", > "Debug: /File[/etc/ssh/sshd_config]/seluser: Found seluser default 'system_u' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/selrole: Found selrole default 'object_r' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/seltype: Found seltype default 'etc_t' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/selrange: Found selrange default 's0' for /etc/ssh/sshd_config", > "Info: Applying configuration version '1534432872'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/before: subscribes to File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/notify: subscribes to Exec[systemd daemon-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/require: subscribes to Package[docker]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/docker/daemon.json]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/require: subscribes to File[/etc/docker/daemon.json]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/require: subscribes to File[/etc/docker/daemon.json]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/require: subscribes to Package[docker]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/before: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/require: subscribes to Class[Sysctl::Base]", 
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/before: subscribes to Augeas[tripleo-mysql-client-conf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/before: subscribes to Class[Ntp]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: subscribes to Class[Ntp::Install]", > "Debug: /Stage[main]/Ntp::Install/before: subscribes to Class[Ntp::Config]", > "Debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]", > "Debug: /Stage[main]/Ntp::Service/before: subscribes to Anchor[ntp::end]", > "Debug: /Stage[main]/Ntp::Service/Service[ntp]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to 
Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: 
subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Ssh::Server::Install/before: subscribes to Class[Ssh::Server::Config]", > "Debug: /Stage[main]/Ssh::Server::Config/notify: subscribes to 
Class[Ssh::Server::Service]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/notify: subscribes to Service[sshd]", > "Debug: /Stage[main]/Ssh::Server::Service/before: subscribes to Anchor[ssh::server::end]", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]/require: subscribes to Class[Ssh::Server::Config]", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Ssh::Server/Anchor[ssh::server::start]/before: subscribes to Class[Ssh::Server::Install]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", 
> "Debug: /Stage[setup]/before: subscribes to Stage[main]", > "Debug: /Stage[runtime]/require: subscribes to Stage[main]", > "Debug: /Stage[runtime]/before: subscribes to Stage[setup_infra]", > "Debug: /Stage[setup_infra]/before: subscribes to Stage[deploy_infra]", > "Debug: /Stage[deploy_infra]/before: subscribes to Stage[setup_app]", > "Debug: /Stage[setup_app]/before: subscribes to Stage[deploy_app]", > "Debug: /Stage[deploy_app]/before: subscribes to Stage[deploy]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/require: subscribes to Package[tuned]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe 
nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_time]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: 
subscribes to Sysctl[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.log_martians]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_ra]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.nf_conntrack_max]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/before: subscribes to Sysctl_runtime[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/before: subscribes to Sysctl_runtime[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/before: subscribes to Sysctl_runtime[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/before: subscribes to Sysctl_runtime[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/before: subscribes to Sysctl_runtime[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.accept_redirects]", > 
"Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/before: subscribes to Sysctl_runtime[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_probes]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.disable_ipv6]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/before: subscribes to Sysctl_runtime[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/before: subscribes to Sysctl_runtime[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/Concat_file[/etc/ssh/sshd_config]/before: subscribes to File[/etc/ssh/sshd_config]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept 
related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 
cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 
gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 
heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 
neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle 
ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire 
relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/Concat_file[/etc/ssh/sshd_config]: Skipping automatic relationship with File[/etc/ssh/sshd_config]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: 
/Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with 
Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: 
/Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding 
autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: 
/Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with 
Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding 
autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 
gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", 
> "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship 
with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 
heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > 
"Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire 
relationship with Service[ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire 
relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 
neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]", > 
"Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 
nova_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 
nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created",
> "Debug: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]: The container Class[Main] will propagate my refresh event",
> "Debug: Class[Main]: The container Stage[main] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]/ensure: created",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/groupadd docker'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Group[docker]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Group[docker]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Exec[directory-create-etc-my.cnf.d](provider=posix): Executing check 'test -d /etc/my.cnf.d'",
> "Debug: Executing: 'test -d /etc/my.cnf.d'",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'set' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/bind-address\", \"172.17.1.10\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'rm' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/ssl\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'rm' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/ssl-ca\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Files changed, should execute",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]: The container Class[Tripleo::Profile::Base::Database::Mysql::Client] will propagate my refresh event",
> "Debug: Class[Tripleo::Profile::Base::Database::Mysql::Client]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/systemctl is-active chronyd'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled chronyd'",
> "Debug: Executing: '/usr/bin/systemctl stop chronyd'",
> "Debug: Executing: '/usr/bin/systemctl disable chronyd'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]: The container Class[Tripleo::Profile::Base::Time::Ntp] will propagate my refresh event",
> "Debug: Class[Tripleo::Profile::Base::Time::Ntp]: The container Stage[main] will propagate my refresh event",
> "Debug: Prefetching norpm resources for package",
> "Debug: Executing: '/usr/bin/rpm -q ntp --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Info: Computing checksum on file /etc/ntp.conf",
> "Info: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]: Filebucketed /etc/ntp.conf to puppet with sum 913c85f0fde85f83c2d6c030ecf259e9",
> "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'",
> "Debug: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]: The container Class[Ntp::Config] will propagate my refresh event",
> "Debug: Class[Ntp::Config]: The container Stage[main] will propagate my refresh event",
> "Info: Class[Ntp::Config]: Scheduling refresh of Class[Ntp::Service]",
> "Info: Class[Ntp::Service]: Scheduling refresh of Service[ntp]",
> "Debug: Executing: '/usr/bin/systemctl is-active ntpd'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled ntpd'",
> "Debug: Executing: '/usr/bin/systemctl unmask ntpd'",
> "Debug: Executing: '/usr/bin/systemctl start ntpd'",
> "Debug: Executing: '/usr/bin/systemctl enable ntpd'",
> "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Ntp::Service/Service[ntp]: The container Class[Ntp::Service] will propagate my refresh event",
> "Info: /Stage[main]/Ntp::Service/Service[ntp]: Unscheduling refresh on Service[ntp]",
> "Debug: Class[Ntp::Service]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q pacemaker --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q pcs --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q fence-agents-all --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q pacemaker-libs --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q openssh-server --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'",
> "Debug: /Stage[main]/Timezone/File[/etc/localtime]: The container Class[Timezone] will propagate my refresh event",
> "Debug: Class[Timezone]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q iptables --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-active firewalld'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled firewalld'",
> "Debug: Executing: '/usr/bin/rpm -q iptables-services --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-active iptables'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled iptables'",
> "Debug: Executing: '/usr/bin/systemctl unmask iptables'",
> "Debug: Executing: '/usr/bin/systemctl start iptables'",
> "Debug: Executing: '/usr/bin/systemctl enable iptables'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: The container Class[Firewall::Linux::Redhat] will propagate my refresh event",
> "Info: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Unscheduling refresh on Service[iptables]",
> "Debug: Executing: '/usr/bin/systemctl is-active ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl unmask ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl start ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl enable ip6tables'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: The container Class[Firewall::Linux::Redhat] will propagate my refresh event",
> "Info: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Unscheduling refresh on Service[ip6tables]",
> "Debug: Executing: '/usr/bin/rpm -q tuned --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Exec[tuned-adm](provider=posix): Executing check 'tuned-adm active | grep -q '''",
> "Debug: Executing: 'tuned-adm active | grep -q '''",
> "Debug: Exec[modprobe nf_conntrack](provider=posix): Executing check 'egrep -q '^nf_conntrack ' /proc/modules'",
> "Debug: Executing: 'egrep -q '^nf_conntrack ' /proc/modules'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]: The container Kmod::Load[nf_conntrack] will propagate my refresh event",
> "Debug: Kmod::Load[nf_conntrack]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Exec[modprobe nf_conntrack_proto_sctp](provider=posix): Executing check 'egrep -q '^nf_conntrack_proto_sctp ' /proc/modules'",
> "Debug: Executing: 'egrep -q '^nf_conntrack_proto_sctp ' /proc/modules'",
> "Debug: Exec[modprobe nf_conntrack_proto_sctp](provider=posix): Executing 'modprobe nf_conntrack_proto_sctp'",
> "Debug: Executing: 'modprobe nf_conntrack_proto_sctp'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]: The container Kmod::Load[nf_conntrack_proto_sctp] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]: The container Kmod::Load[nf_conntrack_proto_sctp] will propagate my refresh event",
> "Debug: Kmod::Load[nf_conntrack_proto_sctp]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Prefetching parsed resources for sysctl",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created",
> "Debug: Flushing sysctl provider target /etc/sysctl.conf",
> "Info: Computing checksum on file /etc/sysctl.conf",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]: The container Sysctl::Value[fs.inotify.max_user_instances] will propagate my refresh event",
> "Debug: Prefetching sysctl_runtime resources for sysctl_runtime",
> "Debug: Executing: '/usr/sbin/sysctl -a'",
> "Debug: Executing: '/usr/sbin/sysctl fs.inotify.max_user_instances=1024'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]: The container Sysctl::Value[fs.inotify.max_user_instances] will propagate my refresh event",
> "Debug: Sysctl::Value[fs.inotify.max_user_instances]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]: The container Sysctl::Value[fs.suid_dumpable] will propagate my refresh event",
> "Debug: Sysctl::Value[fs.suid_dumpable]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]: The container Sysctl::Value[kernel.dmesg_restrict] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl kernel.dmesg_restrict=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]: The container Sysctl::Value[kernel.dmesg_restrict] will propagate my refresh event",
> "Debug: Sysctl::Value[kernel.dmesg_restrict]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]: The container Sysctl::Value[kernel.pid_max] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl kernel.pid_max=1048576'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]: The container Sysctl::Value[kernel.pid_max] will propagate my refresh event",
> "Debug: Sysctl::Value[kernel.pid_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]: The container Sysctl::Value[net.core.netdev_max_backlog] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.core.netdev_max_backlog=10000'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]: The container Sysctl::Value[net.core.netdev_max_backlog] will propagate my refresh event",
> "Debug: Sysctl::Value[net.core.netdev_max_backlog]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]: The container Sysctl::Value[net.ipv4.conf.all.arp_accept] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.arp_accept=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]: The container Sysctl::Value[net.ipv4.conf.all.arp_accept] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.arp_accept]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]: The container Sysctl::Value[net.ipv4.conf.all.log_martians] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.log_martians=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]: The container Sysctl::Value[net.ipv4.conf.all.log_martians] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.log_martians]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.all.secure_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.secure_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.all.secure_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.secure_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]: The container Sysctl::Value[net.ipv4.conf.all.send_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.send_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]: The container Sysctl::Value[net.ipv4.conf.all.send_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.send_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv4.conf.default.accept_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.accept_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv4.conf.default.accept_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]: The container Sysctl::Value[net.ipv4.conf.default.log_martians] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.log_martians=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]: The container Sysctl::Value[net.ipv4.conf.default.log_martians] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.log_martians]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.default.secure_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.secure_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.default.secure_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.secure_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]: The container Sysctl::Value[net.ipv4.conf.default.send_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.send_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]: The container Sysctl::Value[net.ipv4.conf.default.send_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.send_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: The container Sysctl::Value[net.ipv4.ip_forward] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.ip_forward=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl_runtime[net.ipv4.ip_forward]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl_runtime[net.ipv4.ip_forward]: The container Sysctl::Value[net.ipv4.ip_forward] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.ip_forward]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q docker --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/ensure: defined content as '{md5}b984426de0b5978853686a649b64e4b8'",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]: Scheduling refresh of Exec[systemd daemon-reload]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Exec[systemd daemon-reload](provider=posix): Executing 'systemctl daemon-reload'",
> "Debug: Executing: 'systemctl daemon-reload'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Triggered 'refresh' from 1 events",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker/OPTIONS\", \"\\\"--log-driver=journald --signature-verification=false --iptables=false --live-restore -H unix:///run/docker.sock -H unix:///var/lib/openstack/docker.sock\\\"\"]",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/returns: executed successfully",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker/INSECURE_REGISTRY\", \"\\\"--insecure-registry 192.168.24.1:8787\\\"\"]",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/returns: executed successfully",
> "Info: 
/Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): sending command 'rm' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"registry-mirrors\\\"]\"]", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Skipping because no files were changed", > "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Closed the augeas connection", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): sending command 'set' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"debug\\\"]\", \"debug\"]", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): sending command 'set' with params [\"/files/etc/docker/daemon.json/dict/entry[. 
= \\\"debug\\\"]/const\", \"true\"]", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker-storage/DOCKER_STORAGE_OPTIONS\", \"\\\" -s overlay2\\\"\"]", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: 
Augeas[docker-sysconfig-network](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker-network/DOCKER_NETWORK_OPTIONS\", \"\\\" --bip=172.31.0.1/24\\\"\"]", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Files changed, should execute", > "Debug: Augeas[docker-sysconfig-network](provider=augeas): Closed the augeas connection", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/returns: executed successfully", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-active docker'", > "Debug: Executing: '/usr/bin/systemctl is-enabled docker'", > "Debug: Executing: '/usr/bin/systemctl unmask docker'", > "Debug: Executing: '/usr/bin/systemctl start docker'", > "Debug: Executing: '/usr/bin/systemctl enable docker'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/ensure: ensure changed 'stopped' to 'running'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: Unscheduling refresh on Service[docker]", > "Debug: Class[Tripleo::Profile::Base::Docker]: The container Stage[main] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh1] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh1=1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh1] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh2] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh2=2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh2] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh3] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh3=4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh3] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]: The container Sysctl::Value[net.ipv4.tcp_keepalive_intvl] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_intvl=1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]: The container Sysctl::Value[net.ipv4.tcp_keepalive_intvl] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_intvl]: The container 
Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]: The container Sysctl::Value[net.ipv4.tcp_keepalive_probes] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_probes=5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]: The container Sysctl::Value[net.ipv4.tcp_keepalive_probes] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_probes]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]: The container Sysctl::Value[net.ipv4.tcp_keepalive_time] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_time=5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]: The container Sysctl::Value[net.ipv4.tcp_keepalive_time] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.ipv4.tcp_keepalive_time]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]: The container Sysctl::Value[net.ipv6.conf.all.accept_ra] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.accept_ra=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]: The container Sysctl::Value[net.ipv6.conf.all.accept_ra] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.accept_ra]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.all.accept_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.accept_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]: The container 
Sysctl::Value[net.ipv6.conf.all.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]: The container Sysctl::Value[net.ipv6.conf.all.autoconf] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.autoconf=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]: The container Sysctl::Value[net.ipv6.conf.all.autoconf] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.autoconf]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.all.disable_ipv6] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]: The container Sysctl::Value[net.ipv6.conf.default.accept_ra] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.accept_ra=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]: The container Sysctl::Value[net.ipv6.conf.default.accept_ra] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.accept_ra]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.accept_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]: The container Sysctl::Value[net.ipv6.conf.default.autoconf] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.autoconf=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]: The container Sysctl::Value[net.ipv6.conf.default.autoconf] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.autoconf]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.default.disable_ipv6] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.lo.disable_ipv6] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]: The container Sysctl::Value[net.netfilter.nf_conntrack_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.netfilter.nf_conntrack_max=500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]: The container Sysctl::Value[net.netfilter.nf_conntrack_max] will propagate my refresh event", > "Debug: Sysctl::Value[net.netfilter.nf_conntrack_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]: The container Sysctl::Value[net.nf_conntrack_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.nf_conntrack_max=500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]: The container Sysctl::Value[net.nf_conntrack_max] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.nf_conntrack_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Kernel]: The container Stage[main] will propagate my refresh event", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: The container Stage[main] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-active pcsd'", > "Debug: Executing: '/usr/bin/systemctl is-enabled pcsd'", > "Debug: Executing: '/usr/bin/systemctl unmask pcsd'", > "Debug: Executing: '/usr/bin/systemctl start pcsd'", > "Debug: Executing: '/usr/bin/systemctl enable pcsd'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: The container Class[Pacemaker::Service] will propagate my refresh event", > "Info: 
/Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling refresh on Service[pcsd]", > "Debug: Executing: '/usr/sbin/usermod -p $6$hqE1CAROWf$WB.qvxLHBoiPIMdoggkzKV4TduNY5xgv42u.jlfyllWdewzqzLz5l4ukqDwI/V8mdqp6dT1SSqP1DmtuxRotS0 hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Debug: Executing: '/usr/sbin/usermod -G haclient hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Info: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Scheduling refresh of Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Exec try 1/360", > "Debug: Exec[reauthenticate-across-all-nodes](provider=posix): Executing '/sbin/pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p a27rypXMwVPVqWHT --force'", > "Debug: Executing: '/sbin/pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p a27rypXMwVPVqWHT --force'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Sleeping for 10.0 seconds between tries", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Exec try 2/360", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Exec[Create Cluster tripleo_cluster](provider=posix): Executing check '/usr/bin/test -f /etc/corosync/corosync.conf'", > "Debug: Executing: '/usr/bin/test -f /etc/corosync/corosync.conf'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: Exec try 1/10", > 
"Debug: Exec[Create Cluster tripleo_cluster](provider=posix): Executing '/sbin/pcs cluster setup --wait --name tripleo_cluster controller-0 controller-1 controller-2 --token 10000 --encryption 1'", > "Debug: Executing: '/sbin/pcs cluster setup --wait --name tripleo_cluster controller-0 controller-1 controller-2 --token 10000 --encryption 1'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/owner: owner changed 'root' to 'hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/group: group changed 'root' to 'haclient'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/mode: mode changed '0755' to '0750'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Info: Computing checksum on file /etc/pacemaker/authkey", > "Info: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Filebucketed /etc/pacemaker/authkey to puppet with sum 3de5211976d73e9333cc7ebc4f25be20", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/content: content changed '{md5}3de5211976d73e9333cc7ebc4f25be20' to '{md5}0935666a8d0f9bd85e683dd1382bd797'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Exec[Start Cluster tripleo_cluster](provider=posix): Executing check '/sbin/pcs status >/dev/null 2>&1'", > "Debug: Executing: '/sbin/pcs status >/dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: 
Exec try 1/10", > "Debug: Exec[Start Cluster tripleo_cluster](provider=posix): Executing '/sbin/pcs cluster start --all'", > "Debug: Executing: '/sbin/pcs cluster start --all'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-enabled corosync'", > "Debug: Executing: '/usr/bin/systemctl unmask corosync'", > "Debug: Executing: '/usr/bin/systemctl enable corosync'", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: The container Class[Pacemaker::Service] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-enabled pacemaker'", > "Debug: Executing: '/usr/bin/systemctl unmask pacemaker'", > "Debug: Executing: '/usr/bin/systemctl enable pacemaker'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: The container Class[Pacemaker::Service] will propagate my refresh event", > "Debug: Class[Pacemaker::Service]: The container Stage[main] will propagate my refresh event", > "Debug: Exec[wait-for-settle](provider=posix): Executing check '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: Executing: '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 1/360", > "Debug: Exec[wait-for-settle](provider=posix): Executing '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Sleeping for 10.0 seconds between tries", > "Debug: 
/Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 2/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 3/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 4/360", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Class[Pacemaker::Corosync]: The container Stage[main] will propagate my refresh event", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Scheduling refresh of Exec[systemctl-daemon-reload]", > "Debug: Exec[systemctl-daemon-reload](provider=posix): Executing 'systemctl daemon-reload'", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: The container Class[Systemd::Systemctl::Daemon_reload] will propagate my refresh event", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: The container Stage[main] will propagate my refresh event", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: The container Class[Systemd] will propagate my refresh event", > "Debug: Class[Systemd]: The container Stage[main] will propagate my refresh event", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180816-18479-1v2xckh returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180816-18479-1v2xckh property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180816-18479-2i4pjz returned ", > "Debug: try 1/20: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180816-18479-2i4pjz property set stonith-enabled=false", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180816-18479-2i4pjz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180816-18479-2i4pjz.orig returned 0 -> CIB updated", > "Debug: property create: property set stonith-enabled=false -> ", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]: The container Pacemaker::Property[Disable STONITH] will propagate my refresh event", > "Debug: Pacemaker::Property[Disable STONITH]: The container Class[Pacemaker::Stonith] will propagate my refresh event", > "Debug: Class[Pacemaker::Stonith]: The container Stage[main] will propagate my refresh event", > "Info: Computing checksum on file /etc/ssh/sshd_config", > "Info: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: Filebucketed /etc/ssh/sshd_config to puppet with sum 781dbef6518331ceaa1de16137f5328c", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}781dbef6518331ceaa1de16137f5328c' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: The container Concat[/etc/ssh/sshd_config] will propagate my refresh event", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: The container /etc/ssh/sshd_config will propagate my refresh event", > "Debug: /etc/ssh/sshd_config: The container Concat[/etc/ssh/sshd_config] will propagate my refresh event", > "Debug: Concat[/etc/ssh/sshd_config]: The container Class[Ssh::Server::Config] will propagate my refresh event", > "Info: 
Concat[/etc/ssh/sshd_config]: Scheduling refresh of Service[sshd]", > "Debug: Class[Ssh::Server::Config]: The container Stage[main] will propagate my refresh event", > "Info: Class[Ssh::Server::Config]: Scheduling refresh of Class[Ssh::Server::Service]", > "Info: Class[Ssh::Server::Service]: Scheduling refresh of Service[sshd]", > "Debug: Executing: '/usr/bin/systemctl is-active sshd'", > "Debug: Executing: '/usr/bin/systemctl is-enabled sshd'", > "Debug: Executing: '/usr/bin/systemctl restart sshd'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]: The container Class[Ssh::Server::Service] will propagate my refresh event", > "Debug: Class[Ssh::Server::Service]: The container Stage[main] will propagate my refresh event", > "Debug: Prefetching iptables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]", > "Debug: Executing: '/usr/sbin/iptables-save'", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): Inserting rule 000 accept related established rules ipv4", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 1 --wait -t filter -p all -m state --state ESTABLISHED,RELATED -j ACCEPT -m comment --comment 000 accept related established rules ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [flush]", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): 
[persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/iptables.init save'", > "Debug: /Firewall[000 accept related established rules ipv4]: The container Tripleo::Firewall::Rule[000 accept related established rules] will propagate my refresh event", > "Debug: Prefetching ip6tables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]", > "Debug: Executing: '/usr/sbin/ip6tables-save'", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): Inserting rule 000 accept related established rules ipv6", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 1 --wait -t filter -p all -m state --state ESTABLISHED,RELATED -j ACCEPT -m comment --comment 000 accept related established rules ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'", > "Debug: /Firewall[000 accept related established rules ipv6]: The container Tripleo::Firewall::Rule[000 accept related established rules] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): Inserting rule 001 accept all icmp ipv4", > "Debug: Firewall[001 accept all icmp 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 2 --wait -t filter -p icmp -m state --state NEW -j ACCEPT -m comment --comment 001 accept all icmp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [flush]", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: The container Tripleo::Firewall::Rule[001 accept all icmp] will propagate my refresh event", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): Inserting rule 001 accept all icmp ipv6", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 2 --wait -t filter -p ipv6-icmp -m state --state NEW -j ACCEPT -m comment --comment 001 accept all icmp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: The container Tripleo::Firewall::Rule[001 accept all icmp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): Inserting rule 002 accept all to lo interface ipv4", > "Debug: Firewall[002 accept all to lo interface 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 3 --wait -t filter -i lo -p all -m state --state NEW -j ACCEPT -m comment --comment 002 accept all to lo interface ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [flush]", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: The container Tripleo::Firewall::Rule[002 accept all to lo interface] will propagate my refresh event", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): Inserting rule 002 accept all to lo interface ipv6", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 3 --wait -t filter -i lo -p all -m state --state NEW -j ACCEPT -m comment --comment 002 accept all to lo interface ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: The container Tripleo::Firewall::Rule[002 accept all to lo interface] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: The container Class[Tripleo::Firewall::Pre] will 
propagate my refresh event", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): Inserting rule 003 accept ssh ipv4", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 4 --wait -t filter -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT -m comment --comment 003 accept ssh ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [flush]", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: The container Tripleo::Firewall::Rule[003 accept ssh] will propagate my refresh event", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): Inserting rule 003 accept ssh ipv6", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 4 --wait -t filter -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT -m comment --comment 003 accept ssh ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: The container Tripleo::Firewall::Rule[003 accept ssh] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): Inserting rule 004 accept 
ipv6 dhcpv6 ipv6", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 5 --wait -t filter -d fe80::/64 -p udp -m multiport --dports 546 -m state --state NEW -j ACCEPT -m comment --comment 004 accept ipv6 dhcpv6 ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: The container Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall::Pre]: The container Stage[main] will propagate my refresh event", > "Debug: Firewall[998 log all ipv4](provider=iptables): Inserting rule 998 log all ipv4", > "Debug: Firewall[998 log all ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[998 log all ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p all -m state --state NEW -j LOG -m comment --comment 998 log all ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Debug: Firewall[998 log all ipv4](provider=iptables): [flush]", > "Debug: Firewall[998 log all ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[998 log all ipv4]: The container Tripleo::Firewall::Rule[998 log all] will propagate my refresh event", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): Inserting 
rule 998 log all ipv6", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p all -m state --state NEW -j LOG -m comment --comment 998 log all ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[998 log all ipv6]: The container Tripleo::Firewall::Rule[998 log all] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[998 log all]: The container Class[Tripleo::Firewall::Post] will propagate my refresh event", > "Debug: Firewall[999 drop all ipv4](provider=iptables): Inserting rule 999 drop all ipv4", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[999 drop all ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p all -m state --state NEW -j DROP -m comment --comment 999 drop all ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [flush]", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[999 drop all ipv4]: The container Tripleo::Firewall::Rule[999 drop all] will propagate my refresh event", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): Inserting rule 999 drop all ipv6", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p all -m state --state NEW -j DROP -m comment --comment 999 drop all ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[999 drop all ipv6]: The container Tripleo::Firewall::Rule[999 drop all] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[999 drop all]: The container Class[Tripleo::Firewall::Post] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall::Post]: The container Stage[main] will propagate my refresh event", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): Inserting rule 128 aodh-api ipv4", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042,13042 -m state --state NEW -j ACCEPT -m comment --comment 128 aodh-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: The container Tripleo::Firewall::Rule[128 aodh-api] will propagate my refresh event", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): Inserting rule 128 aodh-api ipv6", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter 
-p tcp -m multiport --dports 8042,13042 -m state --state NEW -j ACCEPT -m comment --comment 128 aodh-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: The container Tripleo::Firewall::Rule[128 aodh-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: The container Tripleo::Firewall::Service_rules[aodh_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): Inserting rule 113 ceph_mgr ipv4", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 6800:7300 -m state --state NEW -j ACCEPT -m comment --comment 113 ceph_mgr ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [flush]", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: The container Tripleo::Firewall::Rule[113 ceph_mgr] will propagate my refresh event", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): Inserting rule 113 ceph_mgr ipv6", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: 
Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6800:7300 -m state --state NEW -j ACCEPT -m comment --comment 113 ceph_mgr ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: The container Tripleo::Firewall::Rule[113 ceph_mgr] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: The container Tripleo::Firewall::Service_rules[ceph_mgr] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): Inserting rule 110 ceph_mon ipv4", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 6789 -m state --state NEW -j ACCEPT -m comment --comment 110 ceph_mon ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [flush]", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: The container Tripleo::Firewall::Rule[110 ceph_mon] will propagate my refresh event", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): Inserting rule 110 ceph_mon ipv6", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): 
Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6789 -m state --state NEW -j ACCEPT -m comment --comment 110 ceph_mon ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: The container Tripleo::Firewall::Rule[110 ceph_mon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: The container Tripleo::Firewall::Service_rules[ceph_mon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[119 cinder ipv4](provider=iptables): Inserting rule 119 cinder ipv4", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[119 cinder ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 8776,13776 -m state --state NEW -j ACCEPT -m comment --comment 119 cinder ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [flush]", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[119 cinder ipv4]: The container Tripleo::Firewall::Rule[119 cinder] will propagate my refresh event", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): Inserting rule 119 cinder ipv6", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[119 cinder 
ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 8776,13776 -m state --state NEW -j ACCEPT -m comment --comment 119 cinder ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[119 cinder ipv6]: The container Tripleo::Firewall::Rule[119 cinder] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[119 cinder]: The container Tripleo::Firewall::Service_rules[cinder_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): Inserting rule 120 iscsi initiator ipv4", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 3260 -m state --state NEW -j ACCEPT -m comment --comment 120 iscsi initiator ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [flush]", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: The container Tripleo::Firewall::Rule[120 iscsi initiator] will propagate my refresh event", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): Inserting 
rule 120 iscsi initiator ipv6", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3260 -m state --state NEW -j ACCEPT -m comment --comment 120 iscsi initiator ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: The container Tripleo::Firewall::Rule[120 iscsi initiator] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: The container Tripleo::Firewall::Service_rules[cinder_volume] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): Inserting rule 112 glance_api ipv4", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 9292,13292 -m state --state NEW -j ACCEPT -m comment --comment 112 glance_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[112 
glance_api ipv4]: The container Tripleo::Firewall::Rule[112 glance_api] will propagate my refresh event", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): Inserting rule 112 glance_api ipv6", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292,13292 -m state --state NEW -j ACCEPT -m comment --comment 112 glance_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[112 glance_api ipv6]: The container Tripleo::Firewall::Rule[112 glance_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: The container Tripleo::Firewall::Service_rules[glance_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): Inserting rule 129 gnocchi-api ipv4", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041,13041 -m state --state NEW -j ACCEPT -m comment --comment 129 gnocchi-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Debug: Firewall[129 gnocchi-api 
ipv4](provider=iptables): [flush]", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: The container Tripleo::Firewall::Rule[129 gnocchi-api] will propagate my refresh event", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): Inserting rule 129 gnocchi-api ipv6", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041,13041 -m state --state NEW -j ACCEPT -m comment --comment 129 gnocchi-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: The container Tripleo::Firewall::Rule[129 gnocchi-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: The container Tripleo::Firewall::Service_rules[gnocchi_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): Inserting rule 140 gnocchi-statsd ipv4", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p udp -m multiport --dports 8125 -m state --state NEW -j ACCEPT -m comment --comment 140 gnocchi-statsd ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [flush]", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: The container Tripleo::Firewall::Rule[140 gnocchi-statsd] will propagate my refresh event", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): Inserting rule 140 gnocchi-statsd ipv6", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p udp -m multiport --dports 8125 -m state --state NEW -j ACCEPT -m comment --comment 140 gnocchi-statsd ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: The container Tripleo::Firewall::Rule[140 gnocchi-statsd] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: The container Tripleo::Firewall::Service_rules[gnocchi_statsd] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): Inserting rule 107 haproxy stats ipv4", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): Current resource: 
Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 1993 -m state --state NEW -j ACCEPT -m comment --comment 107 haproxy stats ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [flush]", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: The container Tripleo::Firewall::Rule[107 haproxy stats] will propagate my refresh event", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): Inserting rule 107 haproxy stats ipv6", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 1993 -m state --state NEW -j ACCEPT -m comment --comment 107 haproxy stats ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: The container Tripleo::Firewall::Rule[107 haproxy stats] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: The container Tripleo::Firewall::Service_rules[haproxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): Inserting rule 125 heat_api ipv4", > "Debug: 
Firewall[125 heat_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8004,13004 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[125 heat_api ipv4]: The container Tripleo::Firewall::Rule[125 heat_api] will propagate my refresh event", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): Inserting rule 125 heat_api ipv6", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8004,13004 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[125 heat_api ipv6]: The container Tripleo::Firewall::Rule[125 heat_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: The container Tripleo::Firewall::Service_rules[heat_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[125 heat_cfn 
ipv4](provider=iptables): Inserting rule 125 heat_cfn ipv4", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8000,13800 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_cfn ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [flush]", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: The container Tripleo::Firewall::Rule[125 heat_cfn] will propagate my refresh event", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): Inserting rule 125 heat_cfn ipv6", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8000,13800 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_cfn ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: The container Tripleo::Firewall::Rule[125 heat_cfn] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: The container Tripleo::Firewall::Service_rules[heat_api_cfn] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: The container 
Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[127 horizon ipv4](provider=iptables): Inserting rule 127 horizon ipv4", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[127 horizon ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT -m comment --comment 127 horizon ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [flush]", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[127 horizon ipv4]: The container Tripleo::Firewall::Rule[127 horizon] will propagate my refresh event", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): Inserting rule 127 horizon ipv6", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT -m comment --comment 127 horizon ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[127 horizon ipv6]: The container Tripleo::Firewall::Rule[127 horizon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[127 horizon]: The container Tripleo::Firewall::Service_rules[horizon] will propagate my refresh event", > "Debug: 
Tripleo::Firewall::Service_rules[horizon]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[111 keystone ipv4](provider=iptables): Inserting rule 111 keystone ipv4", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[111 keystone ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000,13000,35357 -m state --state NEW -j ACCEPT -m comment --comment 111 keystone ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [flush]", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[111 keystone ipv4]: The container Tripleo::Firewall::Rule[111 keystone] will propagate my refresh event", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): Inserting rule 111 keystone ipv6", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 5000,13000,35357 -m state --state NEW -j ACCEPT -m comment --comment 111 keystone ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[111 keystone ipv6]: The container Tripleo::Firewall::Rule[111 keystone] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[111 keystone]: The container 
Tripleo::Firewall::Service_rules[keystone] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[keystone]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[121 memcached ipv4](provider=iptables): Inserting rule 121 memcached ipv4", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[121 memcached ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -s 172.17.1.0/24 -p tcp -m multiport --dports 11211 -m state --state NEW -j ACCEPT -m comment --comment 121 memcached ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [flush]", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[121 memcached ipv4]: The container Tripleo::Firewall::Rule[121 memcached] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[121 memcached]: The container Tripleo::Firewall::Service_rules[memcached] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[memcached]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): Inserting rule 104 mysql galera-bundle ipv4", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 873,3123,3306,4444,4567,4568,9200 -m state --state NEW -j ACCEPT -m comment --comment 104 mysql galera-bundle ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: The container Tripleo::Firewall::Rule[104 mysql galera-bundle] will propagate my refresh event", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): Inserting rule 104 mysql galera-bundle ipv6", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 873,3123,3306,4444,4567,4568,9200 -m state --state NEW -j ACCEPT -m comment --comment 104 mysql galera-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: The container Tripleo::Firewall::Rule[104 mysql galera-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: The container Tripleo::Firewall::Service_rules[mysql] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[mysql]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): Inserting rule 114 neutron api ipv4", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [insert_order]", > "Debug: 
Firewall[114 neutron api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 9696,13696 -m state --state NEW -j ACCEPT -m comment --comment 114 neutron api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [flush]", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[114 neutron api ipv4]: The container Tripleo::Firewall::Rule[114 neutron api] will propagate my refresh event", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): Inserting rule 114 neutron api ipv6", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 9696,13696 -m state --state NEW -j ACCEPT -m comment --comment 114 neutron api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[114 neutron api ipv6]: The container Tripleo::Firewall::Rule[114 neutron api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: The container Tripleo::Firewall::Service_rules[neutron_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[115 neutron dhcp input 
ipv4](provider=iptables): Inserting rule 115 neutron dhcp input ipv4", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p udp -m multiport --dports 67 -m state --state NEW -j ACCEPT -m comment --comment 115 neutron dhcp input ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [flush]", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: The container Tripleo::Firewall::Rule[115 neutron dhcp input] will propagate my refresh event", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): Inserting rule 115 neutron dhcp input ipv6", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p udp -m multiport --dports 67 -m state --state NEW -j ACCEPT -m comment --comment 115 neutron dhcp input ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: The container Tripleo::Firewall::Rule[115 neutron dhcp input] will propagate my refresh event", > "Debug: 
Tripleo::Firewall::Rule[115 neutron dhcp input]: The container Tripleo::Firewall::Service_rules[neutron_dhcp] will propagate my refresh event", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): Inserting rule 116 neutron dhcp output ipv4", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I OUTPUT 1 --wait -t filter -p udp -m multiport --dports 68 -m state --state NEW -j ACCEPT -m comment --comment 116 neutron dhcp output ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [flush]", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: The container Tripleo::Firewall::Rule[116 neutron dhcp output] will propagate my refresh event", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): Inserting rule 116 neutron dhcp output ipv6", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I OUTPUT 1 --wait -t filter -p udp -m multiport --dports 68 -m state --state NEW -j ACCEPT -m comment --comment 116 neutron dhcp output ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[116 neutron dhcp output 
ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: The container Tripleo::Firewall::Rule[116 neutron dhcp output] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: The container Tripleo::Firewall::Service_rules[neutron_dhcp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): Inserting rule 106 neutron_l3 vrrp ipv4", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p vrrp -m state --state NEW -j ACCEPT -m comment --comment 106 neutron_l3 vrrp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [flush]", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: The container Tripleo::Firewall::Rule[106 neutron_l3 vrrp] will propagate my refresh event", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): Inserting rule 106 neutron_l3 vrrp ipv6", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p vrrp -m state --state NEW -j ACCEPT -m comment --comment 106 neutron_l3 vrrp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 
neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: The container Tripleo::Firewall::Rule[106 neutron_l3 vrrp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: The container Tripleo::Firewall::Service_rules[neutron_l3] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): Inserting rule 118 neutron vxlan networks ipv4", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p udp -m multiport --dports 4789 -m state --state NEW -j ACCEPT -m comment --comment 118 neutron vxlan networks ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [flush]", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: The container Tripleo::Firewall::Rule[118 neutron vxlan networks] will propagate my refresh event", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): Inserting rule 118 neutron vxlan networks ipv6", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): Current 
resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p udp -m multiport --dports 4789 -m state --state NEW -j ACCEPT -m comment --comment 118 neutron vxlan networks ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: The container Tripleo::Firewall::Rule[118 neutron vxlan networks] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: The container Tripleo::Firewall::Service_rules[neutron_ovs_agent] will propagate my refresh event", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): Inserting rule 136 neutron gre networks ipv4", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 23 --wait -t filter -p gre -j ACCEPT -m comment --comment 136 neutron gre networks ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [flush]", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: The container Tripleo::Firewall::Rule[136 neutron gre networks] will propagate my refresh event", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): Inserting rule 136 
neutron gre networks ipv6", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 23 --wait -t filter -p gre -j ACCEPT -m comment --comment 136 neutron gre networks ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: The container Tripleo::Firewall::Rule[136 neutron gre networks] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: The container Tripleo::Firewall::Service_rules[neutron_ovs_agent] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): Inserting rule 113 nova_api ipv4", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8774,13774,8775 -m state --state NEW -j ACCEPT -m comment --comment 113 nova_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[113 nova_api 
ipv4]: The container Tripleo::Firewall::Rule[113 nova_api] will propagate my refresh event", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): Inserting rule 113 nova_api ipv6", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8774,13774,8775 -m state --state NEW -j ACCEPT -m comment --comment 113 nova_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[113 nova_api ipv6]: The container Tripleo::Firewall::Rule[113 nova_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: The container Tripleo::Firewall::Service_rules[nova_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): Inserting rule 138 nova_placement ipv4", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8778,13778 -m state --state NEW -j ACCEPT -m comment --comment 138 nova_placement ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): 
[flush]", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: The container Tripleo::Firewall::Rule[138 nova_placement] will propagate my refresh event", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): Inserting rule 138 nova_placement ipv6", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8778,13778 -m state --state NEW -j ACCEPT -m comment --comment 138 nova_placement ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: The container Tripleo::Firewall::Rule[138 nova_placement] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: The container Tripleo::Firewall::Service_rules[nova_placement] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): Inserting rule 137 nova_vnc_proxy ipv4", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 6080,13080 -m state --state NEW -j ACCEPT -m comment --comment 137 nova_vnc_proxy ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: The container Tripleo::Firewall::Rule[137 nova_vnc_proxy] will propagate my refresh event", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): Inserting rule 137 nova_vnc_proxy ipv6", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 6080,13080 -m state --state NEW -j ACCEPT -m comment --comment 137 nova_vnc_proxy ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: The container Tripleo::Firewall::Rule[137 nova_vnc_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: The container Tripleo::Firewall::Service_rules[nova_vnc_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[105 ntp ipv4](provider=iptables): Inserting rule 105 ntp ipv4", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[105 ntp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/iptables -I INPUT 6 --wait -t filter -p udp -m multiport --dports 123 -m state --state NEW -j ACCEPT -m comment --comment 105 ntp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [flush]", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[105 ntp ipv4]: The container Tripleo::Firewall::Rule[105 ntp] will propagate my refresh event", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): Inserting rule 105 ntp ipv6", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p udp -m multiport --dports 123 -m state --state NEW -j ACCEPT -m comment --comment 105 ntp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[105 ntp ipv6]: The container Tripleo::Firewall::Rule[105 ntp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[105 ntp]: The container Tripleo::Firewall::Service_rules[ntp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ntp]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): Inserting rule 130 pacemaker tcp ipv4", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter 
-p tcp -m multiport --dports 2224,3121,21064 -m state --state NEW -j ACCEPT -m comment --comment 130 pacemaker tcp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [flush]", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: The container Tripleo::Firewall::Rule[130 pacemaker tcp] will propagate my refresh event", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): Inserting rule 130 pacemaker tcp ipv6", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 2224,3121,21064 -m state --state NEW -j ACCEPT -m comment --comment 130 pacemaker tcp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: The container Tripleo::Firewall::Rule[130 pacemaker tcp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: The container Tripleo::Firewall::Service_rules[pacemaker] will propagate my refresh event", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): Inserting rule 131 pacemaker udp ipv4", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > 
"Debug: Executing: '/usr/sbin/iptables -I INPUT 26 --wait -t filter -p udp -m multiport --dports 5405 -m state --state NEW -j ACCEPT -m comment --comment 131 pacemaker udp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [flush]", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: The container Tripleo::Firewall::Rule[131 pacemaker udp] will propagate my refresh event", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): Inserting rule 131 pacemaker udp ipv6", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t filter -p udp -m multiport --dports 5405 -m state --state NEW -j ACCEPT -m comment --comment 131 pacemaker udp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: The container Tripleo::Firewall::Rule[131 pacemaker udp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: The container Tripleo::Firewall::Service_rules[pacemaker] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): Inserting rule 140 panko-api ipv4", > "Debug: Firewall[140 
panko-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8977,13977 -m state --state NEW -j ACCEPT -m comment --comment 140 panko-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[140 panko-api ipv4]: The container Tripleo::Firewall::Rule[140 panko-api] will propagate my refresh event", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): Inserting rule 140 panko-api ipv6", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8977,13977 -m state --state NEW -j ACCEPT -m comment --comment 140 panko-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[140 panko-api ipv6]: The container Tripleo::Firewall::Rule[140 panko-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: The container Tripleo::Firewall::Service_rules[panko_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[109 
rabbitmq-bundle ipv4](provider=iptables): Inserting rule 109 rabbitmq-bundle ipv4", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3122,4369,5672,25672 -m state --state NEW -j ACCEPT -m comment --comment 109 rabbitmq-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: The container Tripleo::Firewall::Rule[109 rabbitmq-bundle] will propagate my refresh event", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): Inserting rule 109 rabbitmq-bundle ipv6", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 3122,4369,5672,25672 -m state --state NEW -j ACCEPT -m comment --comment 109 rabbitmq-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: The container Tripleo::Firewall::Rule[109 rabbitmq-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: 
The container Tripleo::Firewall::Service_rules[rabbitmq] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[rabbitmq]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): Inserting rule 108 redis-bundle ipv4", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3124,6379,26379 -m state --state NEW -j ACCEPT -m comment --comment 108 redis-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: The container Tripleo::Firewall::Rule[108 redis-bundle] will propagate my refresh event", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): Inserting rule 108 redis-bundle ipv6", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 3124,6379,26379 -m state --state NEW -j ACCEPT -m comment --comment 108 redis-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[108 
redis-bundle ipv6]: The container Tripleo::Firewall::Rule[108 redis-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: The container Tripleo::Firewall::Service_rules[redis] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[redis]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): Inserting rule 122 swift proxy ipv4", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 22 --wait -t filter -p tcp -m multiport --dports 8080,13808 -m state --state NEW -j ACCEPT -m comment --comment 122 swift proxy ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: The container Tripleo::Firewall::Rule[122 swift proxy] will propagate my refresh event", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): Inserting rule 122 swift proxy ipv6", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 22 --wait -t filter -p tcp -m multiport --dports 8080,13808 -m state --state NEW -j ACCEPT -m comment --comment 122 swift proxy ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Debug: Firewall[122 swift proxy 
ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: The container Tripleo::Firewall::Rule[122 swift proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: The container Tripleo::Firewall::Service_rules[swift_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): Inserting rule 123 swift storage ipv4", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 23 --wait -t filter -p tcp -m multiport --dports 873,6000,6001,6002 -m state --state NEW -j ACCEPT -m comment --comment 123 swift storage ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [flush]", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[123 swift storage ipv4]: The container Tripleo::Firewall::Rule[123 swift storage] will propagate my refresh event", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): Inserting rule 123 swift storage ipv6", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 23 --wait -t filter -p tcp -m multiport --dports 873,6000,6001,6002 -m state --state NEW -j ACCEPT -m comment --comment 123 swift storage ipv6'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[123 swift storage ipv6]: The container Tripleo::Firewall::Rule[123 swift storage] will propagate my refresh event", > "Debug: Class[Firewall::Linux::Redhat]: The container Stage[main] will propagate my refresh event", > "Debug: Exec[nonpersistent_v4_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables'", > "Debug: Exec[nonpersistent_v6_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/iptables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/iptables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup](provider=posix): Executing check '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector'", > "Debug: Executing: '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/ip6tables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/ip6tables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup](provider=posix): Executing 
check '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector'", > "Debug: Executing: '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector'", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: The container Tripleo::Firewall::Service_rules[swift_storage] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall]: The container Stage[main] will propagate my refresh event", > "Debug: Finishing transaction 32725280", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.02 seconds", > "Notice: Applied catalog in 98.21 seconds", > "Changes:", > " Total: 172", > "Events:", > " Success: 172", > "Resources:", > " Changed: 168", > " Out of sync: 168", > " Total: 215", > " Restarted: 4", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Group: 0.02", > " User: 0.05", > " Sysctl: 0.07", > " File: 0.20", > " Sysctl runtime: 0.22", > " Augeas: 0.33", > " Package: 0.46", > " Firewall: 15.72", > " Last run: 1534432975", > " Service: 4.16", > " Config retrieval: 4.99", > " Pcmk property: 5.22", > " Exec: 56.49", > " Total: 87.93", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1534432872", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections reporting, metrics", > "Debug: Finishing transaction 45600380", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "erlexec: HOME must be set", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This 
method is deprecated, please use match expressions with Stdlib::Compat::Ip_address instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp\", 56]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 35]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ssh/manifests/server.pp\", 12]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 42]" > ] > } > > TASK [Run docker-puppet tasks (generate config) during step 1] ***************** > ok: [localhost] > > TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** > fatal: [localhost]: FAILED! => { > "failed_when_result": true, > "outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [ > "2018-08-16 15:23:00,142 INFO: 23520 -- Running docker-puppet", > "2018-08-16 15:23:00,144 INFO: 23520 -- Service compilation completed.", > "2018-08-16 15:23:00,145 INFO: 23520 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-08-16 15:23:00,156 INFO: 23521 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4", > "2018-08-16 15:23:00,156 INFO: 23522 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:23:00,157 INFO: 23523 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:00,158 INFO: 23522 -- Removing container: docker-puppet-heat_api", > "2018-08-16 15:23:00,158 INFO: 23521 -- Removing container: docker-puppet-nova_placement", > "2018-08-16 15:23:00,159 INFO: 23523 -- Removing container: docker-puppet-mysql", > "2018-08-16 15:23:00,198 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:23:00,198 INFO: 23521 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4", > "2018-08-16 15:23:00,200 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:33,198 INFO: 23523 -- Removing container: docker-puppet-mysql", > "2018-08-16 15:23:33,238 INFO: 
23523 -- Finished processing puppet configs for mysql", > "2018-08-16 15:23:33,239 INFO: 23523 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4", > "2018-08-16 15:23:33,240 INFO: 23523 -- Removing container: docker-puppet-gnocchi", > "2018-08-16 15:23:33,264 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4", > "2018-08-16 15:23:35,878 INFO: 23522 -- Removing container: docker-puppet-heat_api", > "2018-08-16 15:23:35,941 INFO: 23522 -- Finished processing puppet configs for heat_api", > "2018-08-16 15:23:35,941 INFO: 23522 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:23:35,942 INFO: 23522 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-16 15:23:35,970 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:23:37,351 INFO: 23521 -- Removing container: docker-puppet-nova_placement", > "2018-08-16 15:23:37,408 INFO: 23521 -- Finished processing puppet configs for nova_placement", > "2018-08-16 15:23:37,408 INFO: 23521 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4", > "2018-08-16 15:23:37,409 INFO: 23521 -- Removing container: docker-puppet-aodh", > "2018-08-16 15:23:37,434 INFO: 23521 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4", > "2018-08-16 15:23:51,135 INFO: 23522 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-16 15:23:51,188 INFO: 23522 -- Finished processing puppet configs for swift_ringbuilder", > "2018-08-16 15:23:51,189 INFO: 23522 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:51,190 INFO: 23522 -- Removing container: docker-puppet-clustercheck", > "2018-08-16 15:23:51,214 INFO: 23522 
-- Pulling image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:54,439 INFO: 23521 -- Removing container: docker-puppet-aodh", > "2018-08-16 15:23:54,497 INFO: 23521 -- Finished processing puppet configs for aodh", > "2018-08-16 15:23:54,497 INFO: 23521 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4", > "2018-08-16 15:23:54,498 INFO: 23521 -- Removing container: docker-puppet-nova", > "2018-08-16 15:23:54,528 INFO: 23521 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4", > "2018-08-16 15:23:54,544 INFO: 23523 -- Removing container: docker-puppet-gnocchi", > "2018-08-16 15:23:54,603 INFO: 23523 -- Finished processing puppet configs for gnocchi", > "2018-08-16 15:23:54,603 INFO: 23523 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4", > "2018-08-16 15:23:54,603 INFO: 23523 -- Removing container: docker-puppet-glance_api", > "2018-08-16 15:23:54,632 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4", > "2018-08-16 15:23:58,585 INFO: 23522 -- Removing container: docker-puppet-clustercheck", > "2018-08-16 15:23:58,630 INFO: 23522 -- Finished processing puppet configs for clustercheck", > "2018-08-16 15:23:58,630 INFO: 23522 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4", > "2018-08-16 15:23:58,631 INFO: 23522 -- Removing container: docker-puppet-redis", > "2018-08-16 15:23:58,656 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4", > "2018-08-16 15:24:10,283 INFO: 23522 -- Removing container: docker-puppet-redis", > "2018-08-16 15:24:10,332 INFO: 23522 -- Finished processing puppet configs for redis", > "2018-08-16 15:24:10,332 INFO: 23522 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4", > "2018-08-16 
15:24:10,333 INFO: 23522 -- Removing container: docker-puppet-memcached", > "2018-08-16 15:24:10,356 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4", > "2018-08-16 15:24:14,100 INFO: 23521 -- Removing container: docker-puppet-nova", > "2018-08-16 15:24:14,163 INFO: 23521 -- Finished processing puppet configs for nova", > "2018-08-16 15:24:14,163 INFO: 23521 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4", > "2018-08-16 15:24:14,163 INFO: 23521 -- Removing container: docker-puppet-iscsid", > "2018-08-16 15:24:14,191 INFO: 23521 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4", > "2018-08-16 15:24:15,201 INFO: 23523 -- Removing container: docker-puppet-glance_api", > "2018-08-16 15:24:15,352 INFO: 23523 -- Finished processing puppet configs for glance_api", > "2018-08-16 15:24:15,352 INFO: 23523 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4", > "2018-08-16 15:24:15,352 INFO: 23523 -- Removing container: docker-puppet-keystone", > "2018-08-16 15:24:15,375 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4", > "2018-08-16 15:24:20,382 INFO: 23522 -- Removing container: docker-puppet-memcached", > "2018-08-16 15:24:20,421 INFO: 23522 -- Finished processing puppet configs for memcached", > "2018-08-16 15:24:20,421 INFO: 23522 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4", > "2018-08-16 15:24:20,421 INFO: 23522 -- Removing container: docker-puppet-panko", > "2018-08-16 15:24:20,453 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4", > "2018-08-16 15:24:22,192 INFO: 23521 -- Removing container: docker-puppet-iscsid", > "2018-08-16 15:24:22,238 INFO: 23521 -- Finished processing puppet configs for iscsid", > "2018-08-16 15:24:22,239 INFO: 23521 -- Starting 
configuration of heat using image 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:24:22,239 INFO: 23521 -- Removing container: docker-puppet-heat", > "2018-08-16 15:24:22,272 INFO: 23521 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:24:32,979 INFO: 23523 -- Removing container: docker-puppet-keystone", > "2018-08-16 15:24:33,039 INFO: 23523 -- Finished processing puppet configs for keystone", > "2018-08-16 15:24:33,039 INFO: 23523 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:24:33,040 INFO: 23523 -- Removing container: docker-puppet-swift", > "2018-08-16 15:24:33,065 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:24:34,484 INFO: 23521 -- Removing container: docker-puppet-heat", > "2018-08-16 15:24:34,527 INFO: 23521 -- Finished processing puppet configs for heat", > "2018-08-16 15:24:34,527 INFO: 23521 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4", > "2018-08-16 15:24:34,528 INFO: 23521 -- Removing container: docker-puppet-cinder", > "2018-08-16 15:24:34,551 INFO: 23521 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4", > "2018-08-16 15:24:36,983 INFO: 23522 -- Removing container: docker-puppet-panko", > "2018-08-16 15:24:37,051 INFO: 23522 -- Finished processing puppet configs for panko", > "2018-08-16 15:24:37,051 INFO: 23522 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4", > "2018-08-16 15:24:37,052 INFO: 23522 -- Removing container: docker-puppet-haproxy", > "2018-08-16 15:24:37,083 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4", > "2018-08-16 15:24:44,323 INFO: 23523 -- Removing container: docker-puppet-swift", > "2018-08-16 15:24:44,379 
INFO: 23523 -- Finished processing puppet configs for swift", > "2018-08-16 15:24:44,379 INFO: 23523 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4", > "2018-08-16 15:24:44,380 INFO: 23523 -- Removing container: docker-puppet-crond", > "2018-08-16 15:24:44,405 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4", > "2018-08-16 15:24:49,039 ERROR: 23522 -- Failed running docker-puppet.py for haproxy", > "2018-08-16 15:24:49,039 ERROR: 23522 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "", > "2018-08-16 15:24:49,039 ERROR: 23522 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: Unknown variable: 'haproxy_member_options_real'. 
at /etc/puppet/modules/tripleo/manifests/haproxy.pp:1082:34", > "Error: Evaluation Error: Error while evaluating a Function Call, union(): Every parameter must be an array at /etc/puppet/modules/tripleo/manifests/haproxy.pp:1082:28 on node controller-0.localdomain", > "+ rc=1", > "+ set -e", > "+ '[' 1 -ne 2 -a 1 -ne 0 ']'", > "+ exit 1", > "2018-08-16 15:24:49,039 INFO: 23522 -- Finished processing puppet configs for haproxy", > "2018-08-16 15:24:49,040 INFO: 23522 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4", > "2018-08-16 15:24:49,040 INFO: 23522 -- Removing container: docker-puppet-ceilometer", > "2018-08-16 15:24:49,067 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4", > "2018-08-16 15:24:51,789 INFO: 23523 -- Removing container: docker-puppet-crond", > "2018-08-16 15:24:51,863 INFO: 23523 -- Finished processing puppet configs for crond", > "2018-08-16 15:24:51,864 INFO: 23523 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4", > "2018-08-16 15:24:51,864 INFO: 23523 -- Removing container: docker-puppet-rabbitmq", > "2018-08-16 15:24:51,901 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4", > "2018-08-16 15:25:01,663 INFO: 23522 -- Removing container: docker-puppet-ceilometer", > "2018-08-16 15:25:01,705 INFO: 23522 -- Finished processing puppet configs for ceilometer", > "2018-08-16 15:25:01,705 INFO: 23522 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4", > "2018-08-16 15:25:01,706 INFO: 23522 -- Removing container: docker-puppet-horizon", > "2018-08-16 15:25:01,729 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4", > "2018-08-16 15:25:03,320 INFO: 23521 -- Removing container: docker-puppet-cinder", > "2018-08-16 15:25:03,371 INFO: 23521 -- 
Finished processing puppet configs for cinder", > "2018-08-16 15:25:10,029 INFO: 23523 -- Removing container: docker-puppet-rabbitmq", > "2018-08-16 15:25:10,074 INFO: 23523 -- Finished processing puppet configs for rabbitmq", > "2018-08-16 15:25:10,074 INFO: 23523 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4", > "2018-08-16 15:25:10,076 INFO: 23523 -- Removing container: docker-puppet-neutron", > "2018-08-16 15:25:10,102 INFO: 23523 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4", > "2018-08-16 15:25:19,616 INFO: 23522 -- Removing container: docker-puppet-horizon", > "2018-08-16 15:25:19,671 INFO: 23522 -- Finished processing puppet configs for horizon", > "2018-08-16 15:25:19,671 INFO: 23522 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4", > "2018-08-16 15:25:19,672 INFO: 23522 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-16 15:25:19,696 INFO: 23522 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4", > "2018-08-16 15:25:27,585 INFO: 23523 -- Removing container: docker-puppet-neutron", > "2018-08-16 15:25:27,619 INFO: 23523 -- Finished processing puppet configs for neutron", > "2018-08-16 15:25:32,384 INFO: 23522 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-16 15:25:32,427 INFO: 23522 -- Finished processing puppet configs for heat_api_cfn", > "2018-08-16 15:25:32,428 ERROR: 23520 -- ERROR configuring haproxy" > ] > } > to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/a23e6105-e8dd-47e0-a897-af2def95a4c4_playbook.retry > > PLAY RECAP ********************************************************************* > localhost : ok=26 changed=13 unreachable=0 failed=1 > > deploy_stderr: | > >overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.2: > resource_type: OS::Heat::StructuredDeployment > physical_resource_id: 
2c100155-e081-4099-8058-5f39135af76a > status: CREATE_FAILED > status_reason: | > Error: resources[2]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 > deploy_stdout: | > > PLAY [localhost] *************************************************************** > > TASK [Gathering Facts] ********************************************************* > ok: [localhost] > > TASK [Create /var/lib/tripleo-config directory] ******************************** > changed: [localhost] > > TASK [Check if puppet step_config.pp manifest exists] ************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write the puppet step_config manifest] *********************************** > changed: [localhost] > > TASK [Create /var/lib/docker-puppet] ******************************************* > changed: [localhost] > > TASK [Check if docker-puppet puppet_config.yaml configuration file exists] ***** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker-puppet.json file] ******************************************* > changed: [localhost] > > TASK [Create /var/lib/docker-config-scripts] *********************************** > changed: [localhost] > > TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** > ok: [localhost] > > TASK [Check if docker_config_scripts.yaml file exists] ************************* > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker config scripts] ********************************************* > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport 
OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} 
seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', u'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', u'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', u'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', u'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho 
"{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', u'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) > changed: [localhost] => (item={'value': {u'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', u'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) > > TASK [Set docker_config_default fact] ****************************************** > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] => (item=None) > ok: [localhost] > > TASK [Check if docker_config.yaml file exists] ********************************* > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Set docker_startup_configs_with_default fact] **************************** > ok: [localhost] > > TASK [Write docker-container-startup-configs] ********************************** > changed: [localhost] > > TASK [Write per-step docker-container-startup-configs] ************************* > changed: [localhost] => (item={'value': {u'cinder_volume_image_tag': {u'start_order': 1, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_image_tag': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_data_ownership': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], u'user': u'root', u'volumes': [u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'detach': False}, u'redis_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'mysql_bootstrap': {u'start_order': 1, 
u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=wQHWYDMtN2zP34A7ppnf36KgZ', u'DB_ROOT_PASSWORD=nqmpfBXNCf'], u'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'detach': False}, u'haproxy_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4' 
'192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'rabbitmq_image_tag': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4' '192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], u'net': u'host', u'detach': False}, u'rabbitmq_bootstrap': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=2vn7bpVGQM3wmDdKDet3'], u'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], u'net': u'host', u'privileged': False}, u'memcached': {u'start_order': 0, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}}, 'key': u'step_1'}) > changed: [localhost] => (item={'value': {u'nova_placement': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'restart': u'always'}, u'swift_rsync_fix': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], u'user': u'root', u'volumes': 
[u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], u'net': u'host', u'detach': False}, u'nova_db_sync': {u'start_order': 3, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'heat_engine_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', 
u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'swift_copy_rings': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'detach': False, u'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], u'user': u'root', u'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, u'nova_api_ensure_default_cell': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], u'net': u'host', u'detach': False}, u'keystone_cron': 
{u'start_order': 4, u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'panko_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'nova_api_db_sync': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'iscsid': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'keystone_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'detach': False, u'privileged': False}, u'ceilometer_init_log': {u'start_order': 0, u'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-08-14.4', u'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'user': u'root'}, u'keystone': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'aodh_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], u'net': u'host', u'detach': False, u'privileged': False}, u'cinder_volume_init_logs': {u'start_order': 0, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'user': u'root', u'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], u'privileged': False}, u'neutron_ovs_bridge': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], u'net': u'host', u'detach': False, u'privileged': True}, u'cinder_api_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'net': u'host', u'detach': False, u'privileged': False}, u'nova_api_map_cell0': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], u'net': u'host', u'detach': False}, u'glance_api_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'environment': [u'KOLLA_BOOTSTRAP=True', 
u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], u'net': u'host', u'detach': False, u'privileged': False}, u'neutron_db_sync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', 
u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], u'net': u'host', u'detach': False, u'privileged': False}, u'keystone_bootstrap': {u'action': u'exec', u'start_order': 3, u'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'XjxMBFahCQcXFECTsWUkKHBKA'], u'user': u'root'}, u'horizon': {u'image': u'192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 
u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_setup_srv': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'command': [u'chown', u'-R', u'swift:', u'/srv/node'], u'user': u'root', u'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) > changed: [localhost] => (item={'value': {u'gnocchi_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], u'user': u'root', u'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, u'mysql_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', 
u'/var/lib/mysql:/var/lib/mysql:rw'], u'net': u'host', u'detach': False}, u'gnocchi_init_lib': {u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], u'user': u'root', u'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, u'cinder_api_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'privileged': False, u'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'user': u'root'}, u'create_dnsmasq_wrapper': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-08-14.4', u'pid': u'host', u'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'detach': False}, u'panko_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], u'user': u'root', u'volumes': [u'/var/log/containers/panko:/var/log/panko', 
u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, u'redis_init_bundle': {u'start_order': 2, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'config_volume': u'redis_init_bundle', u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False}, u'cinder_scheduler_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], u'privileged': False, u'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], u'user': u'root'}, u'glance_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], u'privileged': False, u'volumes': [u'/var/log/containers/glance:/var/log/glance'], u'user': u'root'}, u'clustercheck': {u'start_order': 1, 
u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], u'net': u'host', u'restart': u'always'}, u'haproxy_init_bundle': {u'start_order': 3, u'image': u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False, u'privileged': True}, u'neutron_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], u'privileged': False, u'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], u'user': u'root'}, u'mysql_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4', u'config_volume': u'mysql', u'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'rabbitmq_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], u'net': u'host', u'detach': False}, u'nova_api_init_logs': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'privileged': False, u'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], u'user': u'root'}, u'haproxy_restart_bundle': {u'start_order': 2, u'image': 
u'192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4', u'config_volume': u'haproxy', u'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'create_keepalived_wrapper': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2018-08-14.4', u'pid': u'host', u'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'detach': False}, u'rabbitmq_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4', u'config_volume': u'rabbitmq', u'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'horizon_fix_perms': {u'image': u'192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], u'user': u'root', u'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, u'aodh_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], u'user': u'root', 
u'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, u'nova_metadata_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'privileged': False, u'volumes': [u'/var/log/containers/nova:/var/log/nova'], u'user': u'root'}, u'redis_restart_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4', u'config_volume': u'redis', u'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'heat_init_log': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], u'user': u'root', u'volumes': [u'/var/log/containers/heat:/var/log/heat']}, u'nova_placement_init_log': {u'start_order': 1, u'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4', u'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], u'user': u'root'}, u'keystone_init_log': {u'start_order': 1, u'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], u'image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4', u'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], u'user': u'root'}}, 'key': u'step_2'}) > changed: [localhost] => (item={'value': {u'cinder_volume_init_bundle': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], u'net': u'host', u'detach': False}, u'gnocchi_api': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'gnocchi_statsd': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-statsd:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'gnocchi_metricd': {u'start_order': 1, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-metricd:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_api_discover_hosts': {u'start_order': 1, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534431793'], u'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], u'net': u'host', u'detach': False}, u'ceilometer_gnocchi_upgrade': {u'start_order': 99, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4', u'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'detach': False, u'privileged': False}, u'cinder_volume_restart_bundle': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-volume:2018-08-14.4', 
u'config_volume': u'cinder', u'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'detach': False}, u'gnocchi_db_sync': {u'start_order': 0, u'image': u'192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', 
u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], u'net': u'host', u'detach': False, u'privileged': False}}, 'key': u'step_5'}) > changed: [localhost] => (item={'value': {u'swift_container_updater': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'aodh_evaluator': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-evaluator:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_scheduler': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-scheduler:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_object_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'cinder_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_proxy': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], u'net': u'host', u'restart': u'always'}, u'neutron_dhcp': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-dhcp-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', 
u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'heat_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_object_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'neutron_metadata_agent': {u'start_order': 10, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-metadata-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'ceilometer_agent_central': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'keystone_refresh': {u'action': u'exec', u'start_order': 1, u'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], u'user': u'root'}, u'swift_account_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'aodh_notifier': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-notifier:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_consoleauth': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-consoleauth:2018-08-14.4', u'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'glance_api': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], u'net': u'host', u'privileged': False, u'restart': u'always'}, 
u'swift_account_reaper': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'ceilometer_agent_notification': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-ceilometer-notification:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', 
u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_vnc_proxy': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-novncproxy:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_rsync': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'nova_api': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'aodh_api': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_metadata': {u'start_order': 2, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'nova', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'heat_engine': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-engine:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_container_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'neutron_l3_agent': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', 
u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'cinder_scheduler': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-scheduler:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'nova_conductor': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-nova-conductor:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'heat_api_cfn': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'neutron_ovs_agent': {u'start_order': 10, u'ulimit': [u'nofile=1024'], u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-openvswitch-agent:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], u'net': u'host', u'privileged': True, u'restart': u'always'}, u'cinder_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_account_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_container_replicator': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_updater': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-object:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': 
u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'swift_object_expirer': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'heat_api_cron': {u'image': u'192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4', u'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_container_auditor': {u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-container:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'panko_api': {u'start_order': 2, 
u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'aodh_listener': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-aodh-listener:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', 
u'/var/log/containers/aodh:/var/log/aodh'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'neutron_api': {u'start_order': 0, u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], u'net': u'host', u'privileged': False, u'restart': u'always'}, u'swift_account_server': {u'healthcheck': {u'test': u'/openstack/healthcheck'}, u'image': u'192.168.24.1:8787/rhosp13/openstack-swift-account:2018-08-14.4', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'swift', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], u'net': u'host', u'restart': u'always'}, u'logrotate_crond': {u'image': u'192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4', u'pid': u'host', u'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], u'user': u'root', u'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], u'net': u'none', u'privileged': True, u'restart': u'always'}}, 'key': u'step_4'}) > changed: [localhost] => (item={'value': {}, 'key': u'step_6'}) > > TASK [Create /var/lib/kolla/config_files directory] **************************** > changed: [localhost] > > TASK [Check if kolla_config.yaml file exists] ********************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write kolla config json files] ******************************************* > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': 
u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-scheduler ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], 
u'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-panko/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', u'permissions': [{u'owner': u'root:ceilometer', u'path': u'/etc/panko', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': 
u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/neutron_ovs_agent_launcher.sh', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'rabbitmq:rabbitmq', u'path': u'/var/lib/rabbitmq', u'recurse': True}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/var/log/rabbitmq', u'recurse': True}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/etc/pki/tls/certs/rabbitmq.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'rabbitmq:rabbitmq', u'path': u'/etc/pki/tls/private/rabbitmq.key', 
u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/gnocchi-metricd', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': 
u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', u'permissions': [{u'owner': u'swift:swift', u'path': u'/var/cache/swift', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/iscsi/', u'source': u'/var/lib/kolla/config_files/src-iscsid/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) > changed: [localhost] => (item={'value': {u'config_files': 
[{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'panko:panko', u'path': u'/var/log/panko', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-listener', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-container-server 
/etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'apache:apache', u'path': u'/var/log/horizon/', u'recurse': True}, {u'owner': u'apache:apache', u'path': u'/etc/openstack-dashboard/', u'recurse': True}, {u'owner': u'apache:apache', u'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', u'recurse': False}, {u'owner': u'apache:apache', u'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', u'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/etc/pki/tls/certs/neutron.crt'}, {u'owner': u'neutron:neutron', u'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) > changed: 
[localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', u'permissions': [{u'owner': u'glance:glance', u'path': u'/var/lib/glance', u'recurse': True}, {u'owner': u'glance:glance', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'mysql:mysql', u'path': u'/var/log/mysql', u'recurse': True}, {u'owner': u'mysql:mysql', u'path': u'/etc/pki/tls/certs/mysql.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'mysql:mysql', u'path': u'/etc/pki/tls/private/mysql.key', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', 
u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-api-metadata ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-consoleauth ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', 
u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}, {u'owner': u'neutron:neutron', u'path': u'/var/lib/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': 
u'/var/lib/kolla/config_files/cinder_api_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'optional': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', u'permissions': [{u'owner': u'haproxy:haproxy', u'path': u'/var/lib/haproxy', u'recurse': True}, {u'owner': u'haproxy:haproxy', u'path': u'/etc/pki/tls/certs/haproxy/*', u'optional': True, u'perm': u'0600'}, {u'owner': u'haproxy:haproxy', u'path': u'/etc/pki/tls/private/haproxy/*', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-notifier', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -n', u'permissions': [{u'owner': u'keystone:keystone', u'path': u'/var/log/keystone', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) > changed: [localhost] => (item={'value': {u'config_files': 
[{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'heat:heat', u'path': u'/var/log/heat', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/nova-conductor ', u'permissions': [{u'owner': u'nova:nova', u'path': u'/var/log/nova', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/iscsi/', u'source': u'/var/lib/kolla/config_files/src-iscsid/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/etc/libqb/force-filesystem-sockets', u'source': u'/dev/null', u'owner': u'root', u'perm': u'0644'}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'optional': True, u'preserve_properties': True}, {u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src-tls/*', u'merge': True, u'optional': True, u'preserve_properties': True}], u'command': u'/usr/sbin/pacemaker_remoted', u'permissions': [{u'owner': u'redis:redis', u'path': u'/var/run/redis', u'recurse': True}, {u'owner': u'redis:redis', u'path': u'/var/lib/redis', u'recurse': True}, {u'owner': u'redis:redis', u'path': u'/var/log/redis', u'recurse': True}, {u'owner': u'redis:redis', 
u'path': u'/etc/pki/tls/certs/redis.crt', u'optional': True, u'perm': u'0600'}, {u'owner': u'redis:redis', u'path': u'/etc/pki/tls/private/redis.key', u'optional': True, u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', u'permissions': [{u'owner': u'neutron:neutron', u'path': u'/var/log/neutron', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/httpd -DFOREGROUND', u'permissions': [{u'owner': u'cinder:cinder', u'path': u'/var/log/cinder', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', 
u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/aodh-evaluator', u'permissions': [{u'owner': u'aodh:aodh', u'path': u'/var/log/aodh', u'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) > changed: [localhost] => (item={'value': {u'config_files': [{u'dest': u'/', u'source': u'/var/lib/kolla/config_files/src/*', u'merge': True, u'preserve_properties': True}, {u'dest': u'/etc/ceph/', u'source': u'/var/lib/kolla/config_files/src-ceph/', u'merge': True, u'preserve_properties': True}], u'command': u'/usr/bin/gnocchi-statsd', u'permissions': [{u'owner': u'gnocchi:gnocchi', u'path': u'/var/log/gnocchi', u'recurse': True}, {u'owner': u'gnocchi:gnocchi', u'path': u'/etc/ceph/ceph.client.openstack.keyring', u'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) > > TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ > > TASK [Check if docker_puppet_tasks.yaml file exists] *************************** > ok: [localhost -> localhost] > > TASK [Set fact when file existed] ********************************************** > skipping: [localhost] > > TASK [Write docker-puppet-tasks json files] ************************************ > skipping: [localhost] => (item={'value': [{u'puppet_tags': 
u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', u'config_volume': u'keystone_init_tasks', u'step_config': u'include ::tripleo::profile::base::keystone', u'config_image': u'192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4'}], 'key': u'step_3'}) > > TASK [Set host puppet debugging fact string] *********************************** > ok: [localhost] > > TASK [Write the config_step hieradata] ***************************************** > changed: [localhost] > > TASK [Run puppet host configuration for step 1] ******************************** > changed: [localhost] > > TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** > ok: [localhost] => { > "failed_when_result": false, > "outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [ > "Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=UTF-8", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > 
"Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: 
Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from 
/etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > 
"Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: Facter: Matching apachectl 'Server version: Apache/2.4.6 (Red Hat Enterprise Linux)", > "Server built: May 28 2018 16:19:32'", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not 
exist", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not 
exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > 
"Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking for data source E9CEDCF6-DB3C-4BAB-A327-FAF47AA9C70F", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/E9CEDCF6-DB3C-4BAB-A327-FAF47AA9C70F.json, skipping", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/trusted_cas.pp' in environment production", > "Debug: Automatically imported tripleo::trusted_cas from tripleo/trusted_cas into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile 
/etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::trusted_cas::ca_map in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/docker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::docker from tripleo/profile/base/docker into production", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::registry_mirror in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::docker_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::additional_sockets in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::configure_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::network_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::configure_storage in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::storage_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::debug in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::deployment_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registry_address in JSON backend", > "Debug: hiera(): Looking 
up tripleo::profile::base::docker::docker_namespace in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::docker::insecure_registry in JSON backend", > "Debug: hiera(): Looking up deployment_user in JSON backend", > "Debug: importing '/etc/puppet/modules/sysctl/manifests/value.pp' in environment production", > "Debug: Automatically imported sysctl::value from sysctl/value into production", > "Debug: Resource group[docker] was not determined to be defined", > "Debug: Create new resource group[docker] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/kernel.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::kernel from tripleo/profile/base/kernel into production", > "Debug: hiera(): Looking up tripleo::profile::base::kernel::module_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::kernel::sysctl_settings in JSON backend", > "Debug: hiera(): Looking up kernel_modules in JSON backend", > "Debug: hiera(): Looking up sysctl_settings in JSON backend", > "Debug: importing '/etc/puppet/modules/kmod/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/kmod/manifests/load.pp' in environment production", > "Debug: Automatically imported kmod::load from kmod/load into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql::client from tripleo/profile/base/database/mysql/client into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::enable_ssl in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_read_default_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_read_default_group in JSON backend", > 
"Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::mysql_client_bind_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::ssl_ca in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::client::step in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::time::ntp from tripleo/profile/base/time/ntp into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/init.pp' in environment production", > "Debug: Automatically imported ntp from ntp into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/params.pp' in environment production", > "Debug: Automatically imported ntp::params from ntp/params into production", > "Debug: hiera(): Looking up ntp::autoupdate in JSON backend", > "Debug: hiera(): Looking up ntp::broadcastclient in JSON backend", > "Debug: hiera(): Looking up ntp::config in JSON backend", > "Debug: hiera(): Looking up ntp::config_dir in JSON backend", > "Debug: hiera(): Looking up ntp::config_file_mode in JSON backend", > "Debug: hiera(): Looking up ntp::config_template in JSON backend", > "Debug: hiera(): Looking up ntp::disable_auth in JSON backend", > "Debug: hiera(): Looking up ntp::disable_dhclient in JSON backend", > "Debug: hiera(): Looking up ntp::disable_kernel in JSON backend", > "Debug: hiera(): Looking up ntp::disable_monitor in JSON backend", > "Debug: hiera(): Looking up ntp::fudge in JSON backend", > "Debug: hiera(): Looking up ntp::driftfile in JSON backend", > "Debug: hiera(): Looking up ntp::leapfile in JSON backend", > "Debug: hiera(): Looking up ntp::logfile in JSON backend", > "Debug: hiera(): Looking up ntp::iburst_enable in JSON backend", > "Debug: hiera(): Looking up ntp::keys in JSON backend", > "Debug: hiera(): Looking up ntp::keys_enable in JSON backend", > 
"Debug: hiera(): Looking up ntp::keys_file in JSON backend", > "Debug: hiera(): Looking up ntp::keys_controlkey in JSON backend", > "Debug: hiera(): Looking up ntp::keys_requestkey in JSON backend", > "Debug: hiera(): Looking up ntp::keys_trusted in JSON backend", > "Debug: hiera(): Looking up ntp::minpoll in JSON backend", > "Debug: hiera(): Looking up ntp::maxpoll in JSON backend", > "Debug: hiera(): Looking up ntp::package_ensure in JSON backend", > "Debug: hiera(): Looking up ntp::package_manage in JSON backend", > "Debug: hiera(): Looking up ntp::package_name in JSON backend", > "Debug: hiera(): Looking up ntp::panic in JSON backend", > "Debug: hiera(): Looking up ntp::peers in JSON backend", > "Debug: hiera(): Looking up ntp::preferred_servers in JSON backend", > "Debug: hiera(): Looking up ntp::restrict in JSON backend", > "Debug: hiera(): Looking up ntp::interfaces in JSON backend", > "Debug: hiera(): Looking up ntp::interfaces_ignore in JSON backend", > "Debug: hiera(): Looking up ntp::servers in JSON backend", > "Debug: hiera(): Looking up ntp::service_enable in JSON backend", > "Debug: hiera(): Looking up ntp::service_ensure in JSON backend", > "Debug: hiera(): Looking up ntp::service_manage in JSON backend", > "Debug: hiera(): Looking up ntp::service_name in JSON backend", > "Debug: hiera(): Looking up ntp::service_provider in JSON backend", > "Debug: hiera(): Looking up ntp::stepout in JSON backend", > "Debug: hiera(): Looking up ntp::tinker in JSON backend", > "Debug: hiera(): Looking up ntp::tos in JSON backend", > "Debug: hiera(): Looking up ntp::tos_minclock in JSON backend", > "Debug: hiera(): Looking up ntp::tos_minsane in JSON backend", > "Debug: hiera(): Looking up ntp::tos_floor in JSON backend", > "Debug: hiera(): Looking up ntp::tos_ceiling in JSON backend", > "Debug: hiera(): Looking up ntp::tos_cohort in JSON backend", > "Debug: hiera(): Looking up ntp::udlc in JSON backend", > "Debug: hiera(): Looking up ntp::udlc_stratum in JSON 
backend", > "Debug: hiera(): Looking up ntp::ntpsigndsocket in JSON backend", > "Debug: hiera(): Looking up ntp::authprov in JSON backend", > "Debug: importing '/etc/puppet/modules/ntp/manifests/install.pp' in environment production", > "Debug: Automatically imported ntp::install from ntp/install into production", > "Debug: importing '/etc/puppet/modules/ntp/manifests/config.pp' in environment production", > "Debug: Automatically imported ntp::config from ntp/config into production", > "Debug: Scope(Class[Ntp::Config]): Retrieving template ntp/ntp.conf.erb", > "Debug: template[/etc/puppet/modules/ntp/templates/ntp.conf.erb]: Bound template variables for /etc/puppet/modules/ntp/templates/ntp.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/ntp/templates/ntp.conf.erb]: Interpolated template /etc/puppet/modules/ntp/templates/ntp.conf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/ntp/manifests/service.pp' in environment production", > "Debug: Automatically imported ntp::service from ntp/service into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > 
"Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing 
'/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up 
pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/snmp.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::snmp from tripleo/profile/base/snmp into production", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_config in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_password in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::snmpd_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::snmp::step in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/sshd.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::sshd from tripleo/profile/base/sshd into production", > "Debug: hiera(): Looking up 
tripleo::profile::base::sshd::bannertext in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::motd in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::sshd::port in JSON backend", > "Debug: hiera(): Looking up ssh:server::options in JSON backend", > "Debug: importing '/etc/puppet/modules/ssh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server.pp' in environment production", > "Debug: Automatically imported ssh::server from ssh/server into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/params.pp' in environment production", > "Debug: Automatically imported ssh::params from ssh/params into production", > "Debug: hiera(): Looking up ssh::server::ensure in JSON backend", > "Debug: hiera(): Looking up ssh::server::validate_sshd_file in JSON backend", > "Debug: hiera(): Looking up ssh::server::use_augeas in JSON backend", > "Debug: hiera(): Looking up ssh::server::options_absent in JSON backend", > "Debug: hiera(): Looking up ssh::server::match_block in JSON backend", > "Debug: hiera(): Looking up ssh::server::use_issue_net in JSON backend", > "Debug: hiera(): Looking up ssh::server::options in JSON backend", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/install.pp' in environment production", > "Debug: Automatically imported ssh::server::install from ssh/server/install into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/config.pp' in environment production", > "Debug: Automatically imported ssh::server::config from ssh/server/config into production", > "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production", > "Debug: Automatically imported concat from concat into production", > "Debug: Scope(Class[Ssh::Server::Config]): Retrieving template ssh/sshd_config.erb", > "Debug: 
template[/etc/puppet/modules/ssh/templates/sshd_config.erb]: Bound template variables for /etc/puppet/modules/ssh/templates/sshd_config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/ssh/templates/sshd_config.erb]: Interpolated template /etc/puppet/modules/ssh/templates/sshd_config.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production", > "Debug: Automatically imported concat::fragment from concat/fragment into production", > "Debug: importing '/etc/puppet/modules/ssh/manifests/server/service.pp' in environment production", > "Debug: Automatically imported ssh::server::service from ssh/server/service into production", > "Debug: hiera(): Looking up ssh::server::service::ensure in JSON backend", > "Debug: hiera(): Looking up ssh::server::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/timezone/manifests/init.pp' in environment production", > "Debug: Automatically imported timezone from timezone into production", > "Debug: hiera(): Looking up timezone::timezone in JSON backend", > "Debug: hiera(): Looking up timezone::ensure in JSON backend", > "Debug: hiera(): Looking up timezone::hwutc in JSON backend", > "Debug: hiera(): Looking up timezone::autoupgrade in JSON backend", > "Debug: hiera(): Looking up timezone::notify_services in JSON backend", > "Debug: hiera(): Looking up timezone::package in JSON backend", > "Debug: hiera(): Looking up timezone::zoneinfo_dir in JSON backend", > "Debug: hiera(): Looking up timezone::localtime_file in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file_template in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_file_supports_comment in JSON backend", > "Debug: hiera(): Looking up timezone::timezone_update in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > 
"Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' 
in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/packages.pp' in environment production", > "Debug: Automatically imported tripleo::packages from tripleo/packages into production", > "Debug: hiera(): Looking up tripleo::packages::enable_install in JSON backend", > "Debug: hiera(): Looking up tripleo::packages::enable_upgrade in JSON backend", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/stages.pp' in environment production", > "Debug: Automatically imported stdlib::stages from 
stdlib/stages into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/tuned.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::tuned from tripleo/profile/base/tuned into production", > "Debug: hiera(): Looking up tripleo::profile::base::tuned::profile in JSON backend", > "Debug: Resource package[tuned] was not determined to be defined", > "Debug: Create new resource package[tuned] with params {\"ensure\"=>\"present\"}", > "Debug: Scope(Kmod::Load[nf_conntrack]): Retrieving template kmod/redhat.modprobe.erb", > "Debug: template[/etc/puppet/modules/kmod/templates/redhat.modprobe.erb]: Bound template variables for /etc/puppet/modules/kmod/templates/redhat.modprobe.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/kmod/templates/redhat.modprobe.erb]: Interpolated template /etc/puppet/modules/kmod/templates/redhat.modprobe.erb in 0.00 seconds", > "Debug: Scope(Kmod::Load[nf_conntrack_proto_sctp]): Retrieving template kmod/redhat.modprobe.erb", > "Debug: importing '/etc/puppet/modules/sysctl/manifests/base.pp' in environment production", > "Debug: Automatically imported sysctl::base from sysctl/base into production", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up 
systemd::fallback_ntp_server in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_api_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_api_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_collector_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_collector_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_expirer_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_expirer_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::ceph_mgr::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in 
JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend", > "Debug: hiera(): Looking up memcached_network in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend", 
> "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend", > "Debug: 
hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.rabbitmq.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::rabbitmq::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up snmpd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.swift_ringbuilder.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend", > "Debug: Adding relationship from Sysctl::Value[net.ipv4.ip_forward] to Package[docker] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/docker.service.d] to File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf] to Exec[systemd daemon-reload] with 'notify'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[fs.inotify.max_user_instances] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[fs.suid_dumpable] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[kernel.dmesg_restrict] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[kernel.pid_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.core.netdev_max_backlog] with 'before'", > "Debug: Adding 
relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.arp_accept] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.all.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.conf.default.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.ip_forward] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh1] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh2] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.neigh.default.gc_thresh3] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_intvl] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_probes] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv4.tcp_keepalive_time] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe 
nf_conntrack] to Sysctl[net.ipv6.conf.all.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.all.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.default.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.ipv6.conf.lo.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.netfilter.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack] to Sysctl[net.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[fs.inotify.max_user_instances] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[fs.suid_dumpable] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[kernel.dmesg_restrict] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[kernel.pid_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.core.netdev_max_backlog] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.arp_accept] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to 
Sysctl[net.ipv4.conf.all.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.all.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.log_martians] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.secure_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.conf.default.send_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.ip_forward] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh1] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh2] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.neigh.default.gc_thresh3] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_intvl] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_probes] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv4.tcp_keepalive_time] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] 
to Sysctl[net.ipv6.conf.all.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.all.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.accept_ra] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.accept_redirects] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.autoconf] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.default.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.ipv6.conf.lo.disable_ipv6] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.netfilter.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Exec[modprobe nf_conntrack_proto_sctp] to Sysctl[net.nf_conntrack_max] with 'before'", > "Debug: Adding relationship from Anchor[ntp::begin] to Class[Ntp::Install] with 'before'", > "Debug: Adding relationship from Class[Ntp::Install] to Class[Ntp::Config] with 'before'", > "Debug: Adding relationship from Class[Ntp::Config] to Class[Ntp::Service] with 'notify'", > "Debug: Adding relationship from Class[Ntp::Service] to Anchor[ntp::end] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from 
File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from Anchor[ssh::server::start] to Class[Ssh::Server::Install] with 'before'", > "Debug: Adding relationship from Class[Ssh::Server::Install] to Class[Ssh::Server::Config] with 'before'", > "Debug: Adding relationship from Class[Ssh::Server::Config] to Class[Ssh::Server::Service] with 'notify'", > "Debug: Adding relationship from Class[Ssh::Server::Service] to Anchor[ssh::server::end] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[docker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[chronyd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ntp] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[sshd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[iptables] to 
Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", 
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder 
ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 
nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Stage[runtime] to Stage[setup_infra] with 'before'", > "Debug: Adding relationship from Stage[setup_infra] to Stage[deploy_infra] with 'before'", > "Debug: Adding relationship from Stage[deploy_infra] to Stage[setup_app] with 'before'", > "Debug: Adding relationship from Stage[setup_app] to Stage[deploy_app] with 'before'", > "Debug: Adding relationship from Stage[deploy_app] to Stage[deploy] with 'before'", > "Notice: Compiled catalog for controller-2.localdomain in environment production in 4.28 seconds", > "Debug: /File[/etc/systemd/system/docker.service.d]/seluser: Found seluser default 'system_u' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/selrole: Found selrole default 'object_r' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/seltype: Found seltype default 'container_unit_file_t' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d]/selrange: Found selrange default 's0' for /etc/systemd/system/docker.service.d", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/seluser: Found seluser default 'system_u' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/selrole: Found selrole default 'object_r' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/seltype: Found seltype default 
'container_unit_file_t' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/selrange: Found selrange default 's0' for /etc/systemd/system/docker.service.d/99-unset-mountflags.conf", > "Debug: /File[/etc/docker/daemon.json]/seluser: Found seluser default 'system_u' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/selrole: Found selrole default 'object_r' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/seltype: Found seltype default 'container_config_t' for /etc/docker/daemon.json", > "Debug: /File[/etc/docker/daemon.json]/selrange: Found selrange default 's0' for /etc/docker/daemon.json", > "Debug: /File[/var/lib/openstack]/seluser: Found seluser default 'system_u' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/selrole: Found selrole default 'object_r' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/seltype: Found seltype default 'var_lib_t' for /var/lib/openstack", > "Debug: /File[/var/lib/openstack]/selrange: Found selrange default 's0' for /var/lib/openstack", > "Debug: /File[/etc/ntp.conf]/seluser: Found seluser default 'system_u' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/selrole: Found selrole default 'object_r' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/seltype: Found seltype default 'net_conf_t' for /etc/ntp.conf", > "Debug: /File[/etc/ntp.conf]/selrange: Found selrange default 's0' for /etc/ntp.conf", > "Debug: /File[etc-pacemaker]/seluser: Found seluser default 'system_u' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/selrole: Found selrole default 'object_r' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/seltype: Found seltype default 'etc_t' for /etc/pacemaker", > "Debug: /File[etc-pacemaker]/selrange: Found selrange default 's0' for /etc/pacemaker", > "Debug: /File[etc-pacemaker-authkey]/seluser: Found seluser default 'system_u' for /etc/pacemaker/authkey", > 
"Debug: /File[etc-pacemaker-authkey]/selrole: Found selrole default 'object_r' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/seltype: Found seltype default 'etc_t' for /etc/pacemaker/authkey", > "Debug: /File[etc-pacemaker-authkey]/selrange: Found selrange default 's0' for /etc/pacemaker/authkey", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/seluser: Found seluser default 'system_u' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/selrole: Found selrole default 'object_r' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/seltype: Found seltype default 'systemd_unit_file_t' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants]/selrange: Found selrange default 's0' for /etc/systemd/system/resource-agents-deps.target.wants", > "Debug: /File[/etc/localtime]/seluser: Found seluser default 'system_u' for /etc/localtime", > "Debug: /File[/etc/localtime]/selrole: Found selrole default 'object_r' for /etc/localtime", > "Debug: /File[/etc/localtime]/seltype: Found seltype default 'locale_t' for /etc/localtime", > "Debug: /File[/etc/localtime]/selrange: Found selrange default 's0' for /etc/localtime", > "Debug: /File[/etc/sysconfig/iptables]/seluser: Found seluser default 'system_u' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/selrole: Found selrole default 'object_r' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/seltype: Found seltype default 'system_conf_t' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/iptables]/selrange: Found selrange default 's0' for /etc/sysconfig/iptables", > "Debug: /File[/etc/sysconfig/ip6tables]/seluser: Found seluser default 'system_u' for /etc/sysconfig/ip6tables", > "Debug: 
/File[/etc/sysconfig/ip6tables]/selrole: Found selrole default 'object_r' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/seltype: Found seltype default 'system_conf_t' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/ip6tables]/selrange: Found selrange default 's0' for /etc/sysconfig/ip6tables", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/seluser: Found seluser default 'system_u' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/selrole: Found selrole default 'object_r' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/seltype: Found seltype default 'etc_t' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack.modules]/selrange: Found selrange default 's0' for /etc/sysconfig/modules/nf_conntrack.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/seluser: Found seluser default 'system_u' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/selrole: Found selrole default 'object_r' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/seltype: Found seltype default 'etc_t' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/selrange: Found selrange default 's0' for /etc/sysconfig/modules/nf_conntrack_proto_sctp.modules", > "Debug: /File[/etc/sysctl.conf]/seluser: Found seluser default 'system_u' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/selrole: Found selrole default 'object_r' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/seltype: Found seltype default 'system_conf_t' for /etc/sysctl.conf", > "Debug: /File[/etc/sysctl.conf]/selrange: Found selrange 
default 's0' for /etc/sysctl.conf", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/seluser: Found seluser default 'system_u' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/selrole: Found selrole default 'object_r' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/seltype: Found seltype default 'systemd_unit_file_t' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/selrange: Found selrange default 's0' for /etc/systemd/system/resource-agents-deps.target.wants/docker.service", > "Debug: /Firewall[000 accept related established rules ipv4]: [validate]", > "Debug: /Firewall[000 accept related established rules ipv6]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv4]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv6]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: [validate]", > "Debug: /Firewall[003 accept ssh ipv4]: [validate]", > "Debug: /Firewall[003 accept ssh ipv6]: [validate]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: [validate]", > "Debug: /Firewall[998 log all ipv4]: [validate]", > "Debug: /Firewall[998 log all ipv6]: [validate]", > "Debug: /Firewall[999 drop all ipv4]: [validate]", > "Debug: /Firewall[999 drop all ipv6]: [validate]", > "Debug: /Firewall[128 aodh-api ipv4]: [validate]", > "Debug: /Firewall[128 aodh-api ipv6]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv6]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv4]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv6]: [validate]", > "Debug: /Firewall[119 
cinder ipv4]: [validate]", > "Debug: /Firewall[119 cinder ipv6]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]", > "Debug: /Firewall[112 glance_api ipv4]: [validate]", > "Debug: /Firewall[112 glance_api ipv6]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv4]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv6]: [validate]", > "Debug: /Firewall[125 heat_api ipv4]: [validate]", > "Debug: /Firewall[125 heat_api ipv6]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv4]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv6]: [validate]", > "Debug: /Firewall[127 horizon ipv4]: [validate]", > "Debug: /Firewall[127 horizon ipv6]: [validate]", > "Debug: /Firewall[111 keystone ipv4]: [validate]", > "Debug: /Firewall[111 keystone ipv6]: [validate]", > "Debug: /Firewall[121 memcached ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]", > "Debug: /Firewall[114 neutron api ipv4]: [validate]", > "Debug: /Firewall[114 neutron api ipv6]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv6]: 
[validate]", > "Debug: /Firewall[113 nova_api ipv4]: [validate]", > "Debug: /Firewall[113 nova_api ipv6]: [validate]", > "Debug: /Firewall[138 nova_placement ipv4]: [validate]", > "Debug: /Firewall[138 nova_placement ipv6]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]", > "Debug: /Firewall[105 ntp ipv4]: [validate]", > "Debug: /Firewall[105 ntp ipv6]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv4]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]", > "Debug: /Firewall[140 panko-api ipv4]: [validate]", > "Debug: /Firewall[140 panko-api ipv6]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv4]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv6]: [validate]", > "Debug: /Firewall[122 swift proxy ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv6]: [validate]", > "Debug: /Firewall[123 swift storage ipv4]: [validate]", > "Debug: /Firewall[123 swift storage ipv6]: [validate]", > "Debug: Creating default schedules", > "Debug: /File[/etc/ssh/sshd_config]/seluser: Found seluser default 'system_u' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/selrole: Found selrole default 'object_r' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/seltype: Found seltype default 'etc_t' for /etc/ssh/sshd_config", > "Debug: /File[/etc/ssh/sshd_config]/selrange: Found selrange default 's0' for /etc/ssh/sshd_config", > "Info: Applying configuration version '1534432879'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/require: subscribes to Package[docker]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/before: subscribes to File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/notify: subscribes to Exec[systemd daemon-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/docker/daemon.json]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/require: subscribes to File[/etc/docker/daemon.json]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/subscribe: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-mirror]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/require: subscribes to File[/etc/docker/daemon.json]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/subscribe: 
subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/require: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]/notify: subscribes to Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/before: subscribes to Package[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/require: subscribes to 
Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/require: subscribes to Class[Sysctl::Base]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/before: subscribes to Augeas[tripleo-mysql-client-conf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/before: subscribes to Class[Ntp]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: subscribes to Class[Ntp::Install]", > "Debug: 
/Stage[main]/Ntp::Install/before: subscribes to Class[Ntp::Config]", > "Debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]", > "Debug: /Stage[main]/Ntp::Service/before: subscribes to Anchor[ntp::end]", > "Debug: /Stage[main]/Ntp::Service/Service[ntp]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes 
to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Ssh::Server::Install/before: subscribes to Class[Ssh::Server::Config]", > "Debug: /Stage[main]/Ssh::Server::Config/notify: subscribes to Class[Ssh::Server::Service]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/notify: subscribes to Service[sshd]", > "Debug: /Stage[main]/Ssh::Server::Service/before: subscribes to Anchor[ssh::server::end]", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]/require: subscribes to Class[Ssh::Server::Config]", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Ssh::Server/Anchor[ssh::server::start]/before: subscribes to Class[Ssh::Server::Install]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to 
Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[setup]/before: subscribes to Stage[main]", > "Debug: /Stage[runtime]/require: subscribes to Stage[main]", > "Debug: /Stage[runtime]/before: subscribes to Stage[setup_infra]", > "Debug: /Stage[setup_infra]/before: subscribes to Stage[deploy_infra]", > "Debug: /Stage[deploy_infra]/before: subscribes to Stage[setup_app]", > "Debug: /Stage[setup_app]/before: subscribes to Stage[deploy_app]", > "Debug: /Stage[deploy_app]/before: subscribes to Stage[deploy]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/require: subscribes to Package[tuned]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[fs.suid_dumpable]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to 
Sysctl[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe 
nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/Exec[modprobe nf_conntrack]/before: subscribes to Sysctl[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.core.netdev_max_backlog]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to 
Sysctl[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.ip_forward]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh2]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.accept_redirects]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/before: subscribes to Sysctl[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/before: subscribes to Sysctl_runtime[fs.inotify.max_user_instances]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/before: subscribes to Sysctl_runtime[fs.suid_dumpable]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/before: subscribes to Sysctl_runtime[kernel.dmesg_restrict]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/before: subscribes to Sysctl_runtime[kernel.pid_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/before: subscribes to Sysctl_runtime[net.core.netdev_max_backlog]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.arp_accept]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.all.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.log_martians]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.secure_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/before: subscribes to Sysctl_runtime[net.ipv4.conf.default.send_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/before: subscribes to Sysctl_runtime[net.ipv4.ip_forward]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/before: subscribes to Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_probes]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/before: subscribes to Sysctl_runtime[net.ipv4.tcp_keepalive_time]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.autoconf]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.all.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.accept_ra]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.accept_redirects]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.autoconf]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.default.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/before: subscribes to Sysctl_runtime[net.ipv6.conf.lo.disable_ipv6]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/before: subscribes to Sysctl_runtime[net.netfilter.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/before: subscribes to Sysctl_runtime[net.nf_conntrack_max]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/Concat_file[/etc/ssh/sshd_config]/before: subscribes to File[/etc/ssh/sshd_config]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 
accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop 
all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 
cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 
gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: 
subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 
keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp 
input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp 
output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 
neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks 
ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 
nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy 
ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 
pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 
redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/Concat_file[/etc/ssh/sshd_config]: Skipping automatic relationship with File[/etc/ssh/sshd_config]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding 
autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 
accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log 
all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: 
Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi 
initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > 
"Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with 
Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon 
ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > 
"Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv6]: 
Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: 
Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp 
ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: 
/Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire 
relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: 
/Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > 
"Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with 
Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: 
Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy 
ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: 
/Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Debug: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]: The container Class[Main] will propagate my refresh event", > "Debug: Class[Main]: The container Stage[main] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]: Scheduling refresh of Service[docker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/var/lib/openstack]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/groupadd docker'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Group[docker]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Group[docker]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event", > "Debug: Exec[directory-create-etc-my.cnf.d](provider=posix): Executing check 'test -d /etc/my.cnf.d'", > "Debug: Executing: 'test -d /etc/my.cnf.d'", > "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): 
Will attempt to save and only run if files changed",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'set' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/bind-address\", \"172.17.1.21\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'rm' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/ssl\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): sending command 'rm' with params [\"/files/etc/my.cnf.d/tripleo.cnf/tripleo/ssl-ca\"]",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Files changed, should execute",
> "Debug: Augeas[tripleo-mysql-client-conf](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]: The container Class[Tripleo::Profile::Base::Database::Mysql::Client] will propagate my refresh event",
> "Debug: Class[Tripleo::Profile::Base::Database::Mysql::Client]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/systemctl is-active chronyd'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled chronyd'",
> "Debug: Executing: '/usr/bin/systemctl stop chronyd'",
> "Debug: Executing: '/usr/bin/systemctl disable chronyd'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]: The container Class[Tripleo::Profile::Base::Time::Ntp] will propagate my refresh event",
> "Debug: Class[Tripleo::Profile::Base::Time::Ntp]: The container Stage[main] will propagate my refresh event",
> "Debug: Prefetching norpm resources for package",
> "Debug: Executing: '/usr/bin/rpm -q ntp --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Info: Computing checksum on file /etc/ntp.conf",
> "Info: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]: Filebucketed /etc/ntp.conf to puppet with sum 913c85f0fde85f83c2d6c030ecf259e9",
> "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'",
> "Debug: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]: The container Class[Ntp::Config] will propagate my refresh event",
> "Debug: Class[Ntp::Config]: The container Stage[main] will propagate my refresh event",
> "Info: Class[Ntp::Config]: Scheduling refresh of Class[Ntp::Service]",
> "Info: Class[Ntp::Service]: Scheduling refresh of Service[ntp]",
> "Debug: Executing: '/usr/bin/systemctl is-active ntpd'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled ntpd'",
> "Debug: Executing: '/usr/bin/systemctl unmask ntpd'",
> "Debug: Executing: '/usr/bin/systemctl start ntpd'",
> "Debug: Executing: '/usr/bin/systemctl enable ntpd'",
> "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Ntp::Service/Service[ntp]: The container Class[Ntp::Service] will propagate my refresh event",
> "Info: /Stage[main]/Ntp::Service/Service[ntp]: Unscheduling refresh on Service[ntp]",
> "Debug: Class[Ntp::Service]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q pacemaker --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q pcs --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q fence-agents-all --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/rpm -q pacemaker-libs --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled corosync'",
> "Debug: Executing: '/usr/bin/systemctl unmask corosync'",
> "Debug: Executing: '/usr/bin/systemctl enable corosync'",
> "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'",
> "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: The container Class[Pacemaker::Service] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/systemctl is-enabled pacemaker'",
> "Debug: Executing: '/usr/bin/systemctl unmask pacemaker'",
> "Debug: Executing: '/usr/bin/systemctl enable pacemaker'",
> "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'",
> "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: The container Class[Pacemaker::Service] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q openssh-server --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'",
> "Debug: /Stage[main]/Timezone/File[/etc/localtime]: The container Class[Timezone] will propagate my refresh event",
> "Debug: Class[Timezone]: The container Stage[main] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q iptables --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-active firewalld'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled firewalld'",
> "Debug: Executing: '/usr/bin/rpm -q iptables-services --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Executing: '/usr/bin/systemctl is-active iptables'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled iptables'",
> "Debug: Executing: '/usr/bin/systemctl unmask iptables'",
> "Debug: Executing: '/usr/bin/systemctl start iptables'",
> "Debug: Executing: '/usr/bin/systemctl enable iptables'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: The container Class[Firewall::Linux::Redhat] will propagate my refresh event",
> "Info: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Unscheduling refresh on Service[iptables]",
> "Debug: Executing: '/usr/bin/systemctl is-active ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl unmask ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl start ip6tables'",
> "Debug: Executing: '/usr/bin/systemctl enable ip6tables'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: The container Class[Firewall::Linux::Redhat] will propagate my refresh event",
> "Info: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Unscheduling refresh on Service[ip6tables]",
> "Debug: Executing: '/usr/bin/rpm -q tuned --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Debug: Exec[tuned-adm](provider=posix): Executing check 'tuned-adm active | grep -q '''",
> "Debug: Executing: 'tuned-adm active | grep -q '''",
> "Debug: Exec[modprobe nf_conntrack](provider=posix): Executing check 'egrep -q '^nf_conntrack ' /proc/modules'",
> "Debug: Executing: 'egrep -q '^nf_conntrack ' /proc/modules'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]: The container Kmod::Load[nf_conntrack] will propagate my refresh event",
> "Debug: Kmod::Load[nf_conntrack]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Exec[modprobe nf_conntrack_proto_sctp](provider=posix): Executing check 'egrep -q '^nf_conntrack_proto_sctp ' /proc/modules'",
> "Debug: Executing: 'egrep -q '^nf_conntrack_proto_sctp ' /proc/modules'",
> "Debug: Exec[modprobe nf_conntrack_proto_sctp](provider=posix): Executing 'modprobe nf_conntrack_proto_sctp'",
> "Debug: Executing: 'modprobe nf_conntrack_proto_sctp'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]: The container Kmod::Load[nf_conntrack_proto_sctp] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]: The container Kmod::Load[nf_conntrack_proto_sctp] will propagate my refresh event",
> "Debug: Kmod::Load[nf_conntrack_proto_sctp]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Prefetching parsed resources for sysctl",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created",
> "Debug: Flushing sysctl provider target /etc/sysctl.conf",
> "Info: Computing checksum on file /etc/sysctl.conf",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]: The container Sysctl::Value[fs.inotify.max_user_instances] will propagate my refresh event",
> "Debug: Prefetching sysctl_runtime resources for sysctl_runtime",
> "Debug: Executing: '/usr/sbin/sysctl -a'",
> "Debug: Executing: '/usr/sbin/sysctl fs.inotify.max_user_instances=1024'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]: The container Sysctl::Value[fs.inotify.max_user_instances] will propagate my refresh event",
> "Debug: Sysctl::Value[fs.inotify.max_user_instances]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]: The container Sysctl::Value[fs.suid_dumpable] will propagate my refresh event",
> "Debug: Sysctl::Value[fs.suid_dumpable]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]: The container Sysctl::Value[kernel.dmesg_restrict] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl kernel.dmesg_restrict=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]: The container Sysctl::Value[kernel.dmesg_restrict] will propagate my refresh event",
> "Debug: Sysctl::Value[kernel.dmesg_restrict]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]: The container Sysctl::Value[kernel.pid_max] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl kernel.pid_max=1048576'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]: The container Sysctl::Value[kernel.pid_max] will propagate my refresh event",
> "Debug: Sysctl::Value[kernel.pid_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]: The container Sysctl::Value[net.core.netdev_max_backlog] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.core.netdev_max_backlog=10000'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]: The container Sysctl::Value[net.core.netdev_max_backlog] will propagate my refresh event",
> "Debug: Sysctl::Value[net.core.netdev_max_backlog]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]: The container Sysctl::Value[net.ipv4.conf.all.arp_accept] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.arp_accept=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]: The container Sysctl::Value[net.ipv4.conf.all.arp_accept] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.arp_accept]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]: The container Sysctl::Value[net.ipv4.conf.all.log_martians] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.log_martians=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]: The container Sysctl::Value[net.ipv4.conf.all.log_martians] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.log_martians]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.all.secure_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.secure_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.all.secure_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.secure_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]: The container Sysctl::Value[net.ipv4.conf.all.send_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.all.send_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]: The container Sysctl::Value[net.ipv4.conf.all.send_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.all.send_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv4.conf.default.accept_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.accept_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv4.conf.default.accept_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]: The container Sysctl::Value[net.ipv4.conf.default.log_martians] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.log_martians=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]: The container Sysctl::Value[net.ipv4.conf.default.log_martians] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.log_martians]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.default.secure_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.secure_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]: The container Sysctl::Value[net.ipv4.conf.default.secure_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.secure_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]: The container Sysctl::Value[net.ipv4.conf.default.send_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.conf.default.send_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]: The container Sysctl::Value[net.ipv4.conf.default.send_redirects] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.conf.default.send_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: The container Sysctl::Value[net.ipv4.ip_forward] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.ip_forward=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl_runtime[net.ipv4.ip_forward]/val: val changed '0' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_forward]/Sysctl_runtime[net.ipv4.ip_forward]: The container Sysctl::Value[net.ipv4.ip_forward] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.ip_forward]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/rpm -q docker --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/ensure: defined content as '{md5}b984426de0b5978853686a649b64e4b8'",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]: Scheduling refresh of Exec[systemd daemon-reload]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Exec[systemd daemon-reload](provider=posix): Executing 'systemctl daemon-reload'",
> "Debug: Executing: 'systemctl daemon-reload'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Triggered 'refresh' from 1 events",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker/OPTIONS\", \"\\\"--log-driver=journald --signature-verification=false --iptables=false --live-restore -H unix:///run/docker.sock -H unix:///var/lib/openstack/docker.sock\\\"\"]",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-sysconfig-options](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/returns: executed successfully",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker/INSECURE_REGISTRY\", \"\\\"--insecure-registry 192.168.24.1:8787\\\"\"]",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-sysconfig-registry](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/returns: executed successfully",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): sending command 'rm' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"registry-mirrors\\\"]\"]",
> "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Skipping because no files were changed",
> "Debug: Augeas[docker-daemon.json-mirror](provider=augeas): Closed the augeas connection",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): sending command 'set' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"debug\\\"]\", \"debug\"]",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): sending command 'set' with params [\"/files/etc/docker/daemon.json/dict/entry[. = \\\"debug\\\"]/const\", \"true\"]",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-daemon.json-debug](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/returns: executed successfully",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-sysconfig-storage](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker-storage/DOCKER_STORAGE_OPTIONS\", \"\\\" -s overlay2\\\"\"]",
> "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-sysconfig-storage](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/returns: executed successfully",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Augeas[docker-sysconfig-network](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[docker-sysconfig-network](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[docker-sysconfig-network](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[docker-sysconfig-network](provider=augeas): sending command 'set' with params [\"/files/etc/sysconfig/docker-network/DOCKER_NETWORK_OPTIONS\", \"\\\" --bip=172.31.0.1/24\\\"\"]",
> "Debug: Augeas[docker-sysconfig-network](provider=augeas): Files changed, should execute",
> "Debug: Augeas[docker-sysconfig-network](provider=augeas): Closed the augeas connection",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/returns: executed successfully",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]: Scheduling refresh of Service[docker]",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/systemctl is-active docker'",
> "Debug: Executing: '/usr/bin/systemctl is-enabled docker'",
> "Debug: Executing: '/usr/bin/systemctl unmask docker'",
> "Debug: Executing: '/usr/bin/systemctl start docker'",
> "Debug: Executing: '/usr/bin/systemctl enable docker'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/ensure: ensure changed 'stopped' to 'running'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: The container Class[Tripleo::Profile::Base::Docker] will propagate my refresh event",
> "Info: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]: Unscheduling refresh on Service[docker]",
> "Debug: Class[Tripleo::Profile::Base::Docker]: The container Stage[main] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh1] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh1=1024'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh1] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh2] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh2=2048'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh2] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh3] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.neigh.default.gc_thresh3=4096'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]: The container Sysctl::Value[net.ipv4.neigh.default.gc_thresh3] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]: The container Sysctl::Value[net.ipv4.tcp_keepalive_intvl] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_intvl=1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]: The container Sysctl::Value[net.ipv4.tcp_keepalive_intvl] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_intvl]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]: The container Sysctl::Value[net.ipv4.tcp_keepalive_probes] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_probes=5'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]: The container Sysctl::Value[net.ipv4.tcp_keepalive_probes] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_probes]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]: The container Sysctl::Value[net.ipv4.tcp_keepalive_time] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv4.tcp_keepalive_time=5'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]: The container Sysctl::Value[net.ipv4.tcp_keepalive_time] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv4.tcp_keepalive_time]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]: The container Sysctl::Value[net.ipv6.conf.all.accept_ra] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.accept_ra=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]: The container Sysctl::Value[net.ipv6.conf.all.accept_ra] will propagate my refresh event",
> "Debug: Sysctl::Value[net.ipv6.conf.all.accept_ra]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.all.accept_redirects] will propagate my refresh event",
> "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.accept_redirects=0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'",
> "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]: The container
Sysctl::Value[net.ipv6.conf.all.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]: The container Sysctl::Value[net.ipv6.conf.all.autoconf] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.all.autoconf=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]: The container Sysctl::Value[net.ipv6.conf.all.autoconf] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.autoconf]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.all.disable_ipv6] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.all.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]: The container Sysctl::Value[net.ipv6.conf.default.accept_ra] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.accept_ra=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]: The container Sysctl::Value[net.ipv6.conf.default.accept_ra] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.accept_ra]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.accept_redirects=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]: The container Sysctl::Value[net.ipv6.conf.default.accept_redirects] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.accept_redirects]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]: The container Sysctl::Value[net.ipv6.conf.default.autoconf] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.ipv6.conf.default.autoconf=0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]: The container Sysctl::Value[net.ipv6.conf.default.autoconf] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.autoconf]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.default.disable_ipv6] will propagate my refresh event", > "Debug: Sysctl::Value[net.ipv6.conf.default.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]: The container Sysctl::Value[net.ipv6.conf.lo.disable_ipv6] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]: The container Sysctl::Value[net.netfilter.nf_conntrack_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.netfilter.nf_conntrack_max=500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]: The container Sysctl::Value[net.netfilter.nf_conntrack_max] will propagate my refresh event", > "Debug: Sysctl::Value[net.netfilter.nf_conntrack_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]: The container Sysctl::Value[net.nf_conntrack_max] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/sysctl net.nf_conntrack_max=500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Debug: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]: The container Sysctl::Value[net.nf_conntrack_max] will propagate my refresh event", > "Debug: 
Sysctl::Value[net.nf_conntrack_max]: The container Class[Tripleo::Profile::Base::Kernel] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Kernel]: The container Stage[main] will propagate my refresh event", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: The container Stage[main] will propagate my refresh event", > "Debug: Executing: '/usr/bin/systemctl is-active pcsd'", > "Debug: Executing: '/usr/bin/systemctl is-enabled pcsd'", > "Debug: Executing: '/usr/bin/systemctl unmask pcsd'", > "Debug: Executing: '/usr/bin/systemctl start pcsd'", > "Debug: Executing: '/usr/bin/systemctl enable pcsd'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: The container Class[Pacemaker::Service] will propagate my refresh event", > "Info: 
/Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling refresh on Service[pcsd]", > "Debug: Class[Pacemaker::Service]: The container Stage[main] will propagate my refresh event", > "Debug: Executing: '/usr/sbin/usermod -p $6$ufP7TYmHTl$afq6IQ2XFeSuk2NDK9.yHyIugLpCcSzknaa3hDwYNozOLJdI/oHASVJ7xS2uzLEiZ7vCnrirtyb7xD.fzw8S8/ hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Debug: Executing: '/usr/sbin/usermod -G haclient hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Info: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Scheduling refresh of Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/returns: Exec try 1/360", > "Debug: Exec[reauthenticate-across-all-nodes](provider=posix): Executing '/sbin/pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p a27rypXMwVPVqWHT --force'", > "Debug: Executing: '/sbin/pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p a27rypXMwVPVqWHT --force'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}0935666a8d0f9bd85e683dd1382bd797'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container 
Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Exec[wait-for-settle](provider=posix): Executing check '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: Executing: '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/unless: Error: cluster is not currently running on this node", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 1/360", > "Debug: Exec[wait-for-settle](provider=posix): Executing '/sbin/pcs status | grep -q 'partition with quorum' > /dev/null 2>&1'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Sleeping for 10.0 seconds between tries", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 2/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 3/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 4/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 5/360", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: Exec try 6/360", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: Class[Pacemaker::Corosync]: The container Stage[main] will propagate my refresh event", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Scheduling refresh of Exec[systemctl-daemon-reload]", > "Debug: Exec[systemctl-daemon-reload](provider=posix): Executing 'systemctl daemon-reload'", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: The container 
Class[Systemd::Systemctl::Daemon_reload] will propagate my refresh event", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: The container Stage[main] will propagate my refresh event", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: The container Class[Systemd] will propagate my refresh event", > "Debug: Class[Systemd]: The container Stage[main] will propagate my refresh event", > "Info: Computing checksum on file /etc/ssh/sshd_config", > "Info: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: Filebucketed /etc/ssh/sshd_config to puppet with sum 781dbef6518331ceaa1de16137f5328c", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}781dbef6518331ceaa1de16137f5328c' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: The container Concat[/etc/ssh/sshd_config] will propagate my refresh event", > "Debug: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]: The container /etc/ssh/sshd_config will propagate my refresh event", > "Debug: /etc/ssh/sshd_config: The container Concat[/etc/ssh/sshd_config] will propagate my refresh event", > "Debug: Concat[/etc/ssh/sshd_config]: The container Class[Ssh::Server::Config] will propagate my refresh event", > "Info: Concat[/etc/ssh/sshd_config]: Scheduling refresh of Service[sshd]", > "Debug: Class[Ssh::Server::Config]: The container Stage[main] will propagate my refresh event", > "Info: Class[Ssh::Server::Config]: Scheduling refresh of Class[Ssh::Server::Service]", > "Info: Class[Ssh::Server::Service]: Scheduling refresh of Service[sshd]", > "Debug: Executing: '/usr/bin/systemctl is-active sshd'", > "Debug: Executing: '/usr/bin/systemctl is-enabled sshd'", > "Debug: Executing: '/usr/bin/systemctl restart sshd'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 
'refresh' from 2 events", > "Debug: /Stage[main]/Ssh::Server::Service/Service[sshd]: The container Class[Ssh::Server::Service] will propagate my refresh event", > "Debug: Class[Ssh::Server::Service]: The container Stage[main] will propagate my refresh event", > "Debug: Prefetching iptables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]", > "Debug: Executing: '/usr/sbin/iptables-save'", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): Inserting rule 000 accept related established rules ipv4", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 1 --wait -t filter -p all -m state --state ESTABLISHED,RELATED -j ACCEPT -m comment --comment 000 accept related established rules ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [flush]", > "Debug: Firewall[000 accept related established rules ipv4](provider=iptables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/iptables.init save'", > "Debug: /Firewall[000 accept related established rules ipv4]: The container Tripleo::Firewall::Rule[000 accept related established rules] will propagate my refresh event", > "Debug: Prefetching ip6tables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]", > "Debug: Executing: '/usr/sbin/ip6tables-save'", > "Debug: Firewall[000 accept related established rules 
ipv6](provider=ip6tables): Inserting rule 000 accept related established rules ipv6", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 1 --wait -t filter -p all -m state --state ESTABLISHED,RELATED -j ACCEPT -m comment --comment 000 accept related established rules ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[000 accept related established rules ipv6](provider=ip6tables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'", > "Debug: /Firewall[000 accept related established rules ipv6]: The container Tripleo::Firewall::Rule[000 accept related established rules] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): Inserting rule 001 accept all icmp ipv4", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 2 --wait -t filter -p icmp -m state --state NEW -j ACCEPT -m comment --comment 001 accept all icmp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Debug: Firewall[001 accept all icmp ipv4](provider=iptables): [flush]", > "Debug: Firewall[001 accept all icmp 
ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: The container Tripleo::Firewall::Rule[001 accept all icmp] will propagate my refresh event", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): Inserting rule 001 accept all icmp ipv6", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 2 --wait -t filter -p ipv6-icmp -m state --state NEW -j ACCEPT -m comment --comment 001 accept all icmp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[001 accept all icmp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: The container Tripleo::Firewall::Rule[001 accept all icmp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): Inserting rule 002 accept all to lo interface ipv4", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 3 --wait -t filter -i lo -p all -m state --state NEW -j ACCEPT -m comment --comment 002 accept all to lo interface ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [flush]", 
> "Debug: Firewall[002 accept all to lo interface ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: The container Tripleo::Firewall::Rule[002 accept all to lo interface] will propagate my refresh event", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): Inserting rule 002 accept all to lo interface ipv6", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 3 --wait -t filter -i lo -p all -m state --state NEW -j ACCEPT -m comment --comment 002 accept all to lo interface ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[002 accept all to lo interface ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: The container Tripleo::Firewall::Rule[002 accept all to lo interface] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): Inserting rule 003 accept ssh ipv4", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 4 --wait -t filter -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT -m comment --comment 003 accept ssh ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh 
ipv4]/ensure: created", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [flush]", > "Debug: Firewall[003 accept ssh ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: The container Tripleo::Firewall::Rule[003 accept ssh] will propagate my refresh event", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): Inserting rule 003 accept ssh ipv6", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 4 --wait -t filter -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT -m comment --comment 003 accept ssh ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[003 accept ssh ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: The container Tripleo::Firewall::Rule[003 accept ssh] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): Inserting rule 004 accept ipv6 dhcpv6 ipv6", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 5 --wait -t filter -d fe80::/64 -p udp -m multiport --dports 546 -m state --state NEW -j ACCEPT -m comment --comment 004 accept ipv6 dhcpv6 ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Debug: Firewall[004 
accept ipv6 dhcpv6 ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[004 accept ipv6 dhcpv6 ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: The container Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: The container Class[Tripleo::Firewall::Pre] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall::Pre]: The container Stage[main] will propagate my refresh event", > "Debug: Firewall[998 log all ipv4](provider=iptables): Inserting rule 998 log all ipv4", > "Debug: Firewall[998 log all ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[998 log all ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p all -m state --state NEW -j LOG -m comment --comment 998 log all ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Debug: Firewall[998 log all ipv4](provider=iptables): [flush]", > "Debug: Firewall[998 log all ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[998 log all ipv4]: The container Tripleo::Firewall::Rule[998 log all] will propagate my refresh event", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): Inserting rule 998 log all ipv6", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p all -m state --state NEW -j LOG -m comment --comment 998 log all ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Debug: Firewall[998 log all ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[998 log all 
ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[998 log all ipv6]: The container Tripleo::Firewall::Rule[998 log all] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[998 log all]: The container Class[Tripleo::Firewall::Post] will propagate my refresh event", > "Debug: Firewall[999 drop all ipv4](provider=iptables): Inserting rule 999 drop all ipv4", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[999 drop all ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p all -m state --state NEW -j DROP -m comment --comment 999 drop all ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [flush]", > "Debug: Firewall[999 drop all ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[999 drop all ipv4]: The container Tripleo::Firewall::Rule[999 drop all] will propagate my refresh event", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): Inserting rule 999 drop all ipv6", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p all -m state --state NEW -j DROP -m comment --comment 999 drop all ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[999 drop all ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[999 drop all ipv6]: The container Tripleo::Firewall::Rule[999 drop all] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[999 drop all]: The 
container Class[Tripleo::Firewall::Post] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall::Post]: The container Stage[main] will propagate my refresh event", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): Inserting rule 128 aodh-api ipv4", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042,13042 -m state --state NEW -j ACCEPT -m comment --comment 128 aodh-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[128 aodh-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: The container Tripleo::Firewall::Rule[128 aodh-api] will propagate my refresh event", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): Inserting rule 128 aodh-api ipv6", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8042,13042 -m state --state NEW -j ACCEPT -m comment --comment 128 aodh-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[128 aodh-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: The container Tripleo::Firewall::Rule[128 aodh-api] will propagate my refresh event", > "Debug: 
Tripleo::Firewall::Rule[128 aodh-api]: The container Tripleo::Firewall::Service_rules[aodh_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): Inserting rule 113 ceph_mgr ipv4", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 6800:7300 -m state --state NEW -j ACCEPT -m comment --comment 113 ceph_mgr ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [flush]", > "Debug: Firewall[113 ceph_mgr ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: The container Tripleo::Firewall::Rule[113 ceph_mgr] will propagate my refresh event", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): Inserting rule 113 ceph_mgr ipv6", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6800:7300 -m state --state NEW -j ACCEPT -m comment --comment 113 ceph_mgr ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[113 ceph_mgr ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: The container 
Tripleo::Firewall::Rule[113 ceph_mgr] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: The container Tripleo::Firewall::Service_rules[ceph_mgr] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): Inserting rule 110 ceph_mon ipv4", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 6789 -m state --state NEW -j ACCEPT -m comment --comment 110 ceph_mon ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [flush]", > "Debug: Firewall[110 ceph_mon ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: The container Tripleo::Firewall::Rule[110 ceph_mon] will propagate my refresh event", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): Inserting rule 110 ceph_mon ipv6", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6789 -m state --state NEW -j ACCEPT -m comment --comment 110 ceph_mon ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[110 ceph_mon ipv6](provider=ip6tables): [persist_iptables]", > 
"Debug: /Firewall[110 ceph_mon ipv6]: The container Tripleo::Firewall::Rule[110 ceph_mon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: The container Tripleo::Firewall::Service_rules[ceph_mon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[119 cinder ipv4](provider=iptables): Inserting rule 119 cinder ipv4", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[119 cinder ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 8776,13776 -m state --state NEW -j ACCEPT -m comment --comment 119 cinder ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [flush]", > "Debug: Firewall[119 cinder ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[119 cinder ipv4]: The container Tripleo::Firewall::Rule[119 cinder] will propagate my refresh event", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): Inserting rule 119 cinder ipv6", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 8776,13776 -m state --state NEW -j ACCEPT -m comment --comment 119 cinder ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[119 cinder ipv6](provider=ip6tables): 
[persist_iptables]", > "Debug: /Firewall[119 cinder ipv6]: The container Tripleo::Firewall::Rule[119 cinder] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[119 cinder]: The container Tripleo::Firewall::Service_rules[cinder_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): Inserting rule 120 iscsi initiator ipv4", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 3260 -m state --state NEW -j ACCEPT -m comment --comment 120 iscsi initiator ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [flush]", > "Debug: Firewall[120 iscsi initiator ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: The container Tripleo::Firewall::Rule[120 iscsi initiator] will propagate my refresh event", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): Inserting rule 120 iscsi initiator ipv6", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3260 -m state --state NEW -j ACCEPT -m comment --comment 120 iscsi initiator ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi 
initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[120 iscsi initiator ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: The container Tripleo::Firewall::Rule[120 iscsi initiator] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: The container Tripleo::Firewall::Service_rules[cinder_volume] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): Inserting rule 112 glance_api ipv4", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 9292,13292 -m state --state NEW -j ACCEPT -m comment --comment 112 glance_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[112 glance_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[112 glance_api ipv4]: The container Tripleo::Firewall::Rule[112 glance_api] will propagate my refresh event", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): Inserting rule 112 glance_api ipv6", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292,13292 -m state --state NEW -j ACCEPT -m comment 
--comment 112 glance_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[112 glance_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[112 glance_api ipv6]: The container Tripleo::Firewall::Rule[112 glance_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: The container Tripleo::Firewall::Service_rules[glance_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): Inserting rule 129 gnocchi-api ipv4", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041,13041 -m state --state NEW -j ACCEPT -m comment --comment 129 gnocchi-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[129 gnocchi-api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: The container Tripleo::Firewall::Rule[129 gnocchi-api] will propagate my refresh event", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): Inserting rule 129 gnocchi-api ipv6", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041,13041 -m state --state NEW -j ACCEPT -m comment --comment 129 gnocchi-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[129 gnocchi-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: The container Tripleo::Firewall::Rule[129 gnocchi-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: The container Tripleo::Firewall::Service_rules[gnocchi_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): Inserting rule 140 gnocchi-statsd ipv4", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p udp -m multiport --dports 8125 -m state --state NEW -j ACCEPT -m comment --comment 140 gnocchi-statsd ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [flush]", > "Debug: Firewall[140 gnocchi-statsd ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: The container Tripleo::Firewall::Rule[140 gnocchi-statsd] will propagate my refresh event", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): Inserting rule 140 gnocchi-statsd ipv6", > "Debug: Firewall[140 
gnocchi-statsd ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p udp -m multiport --dports 8125 -m state --state NEW -j ACCEPT -m comment --comment 140 gnocchi-statsd ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[140 gnocchi-statsd ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: The container Tripleo::Firewall::Rule[140 gnocchi-statsd] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: The container Tripleo::Firewall::Service_rules[gnocchi_statsd] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): Inserting rule 107 haproxy stats ipv4", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 1993 -m state --state NEW -j ACCEPT -m comment --comment 107 haproxy stats ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [flush]", > "Debug: Firewall[107 haproxy stats ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: The container 
Tripleo::Firewall::Rule[107 haproxy stats] will propagate my refresh event", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): Inserting rule 107 haproxy stats ipv6", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 1993 -m state --state NEW -j ACCEPT -m comment --comment 107 haproxy stats ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[107 haproxy stats ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: The container Tripleo::Firewall::Rule[107 haproxy stats] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: The container Tripleo::Firewall::Service_rules[haproxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): Inserting rule 125 heat_api ipv4", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8004,13004 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Debug: Firewall[125 heat_api ipv4](provider=iptables): [flush]", > "Debug: 
Firewall[125 heat_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[125 heat_api ipv4]: The container Tripleo::Firewall::Rule[125 heat_api] will propagate my refresh event", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): Inserting rule 125 heat_api ipv6", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8004,13004 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[125 heat_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[125 heat_api ipv6]: The container Tripleo::Firewall::Rule[125 heat_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: The container Tripleo::Firewall::Service_rules[heat_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): Inserting rule 125 heat_cfn ipv4", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8000,13800 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_cfn ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Debug: 
Firewall[125 heat_cfn ipv4](provider=iptables): [flush]", > "Debug: Firewall[125 heat_cfn ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: The container Tripleo::Firewall::Rule[125 heat_cfn] will propagate my refresh event", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): Inserting rule 125 heat_cfn ipv6", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8000,13800 -m state --state NEW -j ACCEPT -m comment --comment 125 heat_cfn ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[125 heat_cfn ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: The container Tripleo::Firewall::Rule[125 heat_cfn] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: The container Tripleo::Firewall::Service_rules[heat_api_cfn] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[127 horizon ipv4](provider=iptables): Inserting rule 127 horizon ipv4", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[127 horizon ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT -m comment --comment 127 horizon ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 
horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [flush]", > "Debug: Firewall[127 horizon ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[127 horizon ipv4]: The container Tripleo::Firewall::Rule[127 horizon] will propagate my refresh event", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): Inserting rule 127 horizon ipv6", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT -m comment --comment 127 horizon ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[127 horizon ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[127 horizon ipv6]: The container Tripleo::Firewall::Rule[127 horizon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[127 horizon]: The container Tripleo::Firewall::Service_rules[horizon] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[horizon]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[111 keystone ipv4](provider=iptables): Inserting rule 111 keystone ipv4", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[111 keystone ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000,13000,35357 -m state --state NEW -j ACCEPT -m comment --comment 111 keystone ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [flush]", > "Debug: Firewall[111 keystone ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[111 keystone ipv4]: The container Tripleo::Firewall::Rule[111 keystone] will propagate my refresh event", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): Inserting rule 111 keystone ipv6", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 5000,13000,35357 -m state --state NEW -j ACCEPT -m comment --comment 111 keystone ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[111 keystone ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[111 keystone ipv6]: The container Tripleo::Firewall::Rule[111 keystone] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[111 keystone]: The container Tripleo::Firewall::Service_rules[keystone] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[keystone]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[121 memcached ipv4](provider=iptables): Inserting rule 121 memcached ipv4", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[121 memcached ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -s 172.17.1.0/24 -p tcp -m multiport --dports 11211 
-m state --state NEW -j ACCEPT -m comment --comment 121 memcached ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [flush]", > "Debug: Firewall[121 memcached ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[121 memcached ipv4]: The container Tripleo::Firewall::Rule[121 memcached] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[121 memcached]: The container Tripleo::Firewall::Service_rules[memcached] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[memcached]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): Inserting rule 104 mysql galera-bundle ipv4", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 873,3123,3306,4444,4567,4568,9200 -m state --state NEW -j ACCEPT -m comment --comment 104 mysql galera-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[104 mysql galera-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: The container Tripleo::Firewall::Rule[104 mysql galera-bundle] will propagate my refresh event", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): Inserting rule 104 mysql galera-bundle ipv6", > "Debug: Firewall[104 mysql galera-bundle 
ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 873,3123,3306,4444,4567,4568,9200 -m state --state NEW -j ACCEPT -m comment --comment 104 mysql galera-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[104 mysql galera-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: The container Tripleo::Firewall::Rule[104 mysql galera-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: The container Tripleo::Firewall::Service_rules[mysql] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[mysql]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): Inserting rule 114 neutron api ipv4", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 9696,13696 -m state --state NEW -j ACCEPT -m comment --comment 114 neutron api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [flush]", > "Debug: Firewall[114 neutron api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[114 neutron api ipv4]: The 
container Tripleo::Firewall::Rule[114 neutron api] will propagate my refresh event", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): Inserting rule 114 neutron api ipv6", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 9696,13696 -m state --state NEW -j ACCEPT -m comment --comment 114 neutron api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[114 neutron api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[114 neutron api ipv6]: The container Tripleo::Firewall::Rule[114 neutron api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: The container Tripleo::Firewall::Service_rules[neutron_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): Inserting rule 115 neutron dhcp input ipv4", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p udp -m multiport --dports 67 -m state --state NEW -j ACCEPT -m comment --comment 115 neutron dhcp input ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Debug: 
Firewall[115 neutron dhcp input ipv4](provider=iptables): [flush]", > "Debug: Firewall[115 neutron dhcp input ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: The container Tripleo::Firewall::Rule[115 neutron dhcp input] will propagate my refresh event", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): Inserting rule 115 neutron dhcp input ipv6", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p udp -m multiport --dports 67 -m state --state NEW -j ACCEPT -m comment --comment 115 neutron dhcp input ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[115 neutron dhcp input ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: The container Tripleo::Firewall::Rule[115 neutron dhcp input] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: The container Tripleo::Firewall::Service_rules[neutron_dhcp] will propagate my refresh event", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): Inserting rule 116 neutron dhcp output ipv4", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I OUTPUT 1 --wait -t filter -p udp -m multiport --dports 68 -m state --state NEW -j ACCEPT -m comment --comment 116 neutron dhcp output ipv4'", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [flush]", > "Debug: Firewall[116 neutron dhcp output ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: The container Tripleo::Firewall::Rule[116 neutron dhcp output] will propagate my refresh event", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): Inserting rule 116 neutron dhcp output ipv6", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I OUTPUT 1 --wait -t filter -p udp -m multiport --dports 68 -m state --state NEW -j ACCEPT -m comment --comment 116 neutron dhcp output ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[116 neutron dhcp output ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: The container Tripleo::Firewall::Rule[116 neutron dhcp output] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: The container Tripleo::Firewall::Service_rules[neutron_dhcp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): Inserting rule 106 neutron_l3 vrrp ipv4", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [insert_order]", > 
"Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p vrrp -m state --state NEW -j ACCEPT -m comment --comment 106 neutron_l3 vrrp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [flush]", > "Debug: Firewall[106 neutron_l3 vrrp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: The container Tripleo::Firewall::Rule[106 neutron_l3 vrrp] will propagate my refresh event", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): Inserting rule 106 neutron_l3 vrrp ipv6", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p vrrp -m state --state NEW -j ACCEPT -m comment --comment 106 neutron_l3 vrrp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[106 neutron_l3 vrrp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: The container Tripleo::Firewall::Rule[106 neutron_l3 vrrp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: The container Tripleo::Firewall::Service_rules[neutron_l3] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[118 
neutron vxlan networks ipv4](provider=iptables): Inserting rule 118 neutron vxlan networks ipv4", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p udp -m multiport --dports 4789 -m state --state NEW -j ACCEPT -m comment --comment 118 neutron vxlan networks ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [flush]", > "Debug: Firewall[118 neutron vxlan networks ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: The container Tripleo::Firewall::Rule[118 neutron vxlan networks] will propagate my refresh event", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): Inserting rule 118 neutron vxlan networks ipv6", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p udp -m multiport --dports 4789 -m state --state NEW -j ACCEPT -m comment --comment 118 neutron vxlan networks ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[118 neutron vxlan networks ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: The container 
Tripleo::Firewall::Rule[118 neutron vxlan networks] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: The container Tripleo::Firewall::Service_rules[neutron_ovs_agent] will propagate my refresh event", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): Inserting rule 136 neutron gre networks ipv4", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 23 --wait -t filter -p gre -j ACCEPT -m comment --comment 136 neutron gre networks ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [flush]", > "Debug: Firewall[136 neutron gre networks ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: The container Tripleo::Firewall::Rule[136 neutron gre networks] will propagate my refresh event", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): Inserting rule 136 neutron gre networks ipv6", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 23 --wait -t filter -p gre -j ACCEPT -m comment --comment 136 neutron gre networks ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Debug: Firewall[136 neutron gre networks ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[136 
neutron gre networks ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: The container Tripleo::Firewall::Rule[136 neutron gre networks] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: The container Tripleo::Firewall::Service_rules[neutron_ovs_agent] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): Inserting rule 113 nova_api ipv4", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8774,13774,8775 -m state --state NEW -j ACCEPT -m comment --comment 113 nova_api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [flush]", > "Debug: Firewall[113 nova_api ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[113 nova_api ipv4]: The container Tripleo::Firewall::Rule[113 nova_api] will propagate my refresh event", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): Inserting rule 113 nova_api ipv6", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8774,13774,8775 -m state --state NEW -j ACCEPT -m comment --comment 113 nova_api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 
nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[113 nova_api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[113 nova_api ipv6]: The container Tripleo::Firewall::Rule[113 nova_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: The container Tripleo::Firewall::Service_rules[nova_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): Inserting rule 138 nova_placement ipv4", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8778,13778 -m state --state NEW -j ACCEPT -m comment --comment 138 nova_placement ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [flush]", > "Debug: Firewall[138 nova_placement ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: The container Tripleo::Firewall::Rule[138 nova_placement] will propagate my refresh event", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): Inserting rule 138 nova_placement ipv6", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8778,13778 -m state --state NEW -j 
ACCEPT -m comment --comment 138 nova_placement ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[138 nova_placement ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: The container Tripleo::Firewall::Rule[138 nova_placement] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: The container Tripleo::Firewall::Service_rules[nova_placement] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): Inserting rule 137 nova_vnc_proxy ipv4", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 6080,13080 -m state --state NEW -j ACCEPT -m comment --comment 137 nova_vnc_proxy ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[137 nova_vnc_proxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: The container Tripleo::Firewall::Rule[137 nova_vnc_proxy] will propagate my refresh event", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): Inserting rule 137 nova_vnc_proxy ipv6", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [insert_order]", > "Debug: 
Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 6080,13080 -m state --state NEW -j ACCEPT -m comment --comment 137 nova_vnc_proxy ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[137 nova_vnc_proxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: The container Tripleo::Firewall::Rule[137 nova_vnc_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: The container Tripleo::Firewall::Service_rules[nova_vnc_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[105 ntp ipv4](provider=iptables): Inserting rule 105 ntp ipv4", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[105 ntp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p udp -m multiport --dports 123 -m state --state NEW -j ACCEPT -m comment --comment 105 ntp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [flush]", > "Debug: Firewall[105 ntp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[105 ntp ipv4]: The container Tripleo::Firewall::Rule[105 ntp] will propagate my refresh event", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): Inserting rule 105 ntp ipv6", > "Debug: Firewall[105 ntp 
ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p udp -m multiport --dports 123 -m state --state NEW -j ACCEPT -m comment --comment 105 ntp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[105 ntp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[105 ntp ipv6]: The container Tripleo::Firewall::Rule[105 ntp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[105 ntp]: The container Tripleo::Firewall::Service_rules[ntp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[ntp]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): Inserting rule 130 pacemaker tcp ipv4", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 2224,3121,21064 -m state --state NEW -j ACCEPT -m comment --comment 130 pacemaker tcp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [flush]", > "Debug: Firewall[130 pacemaker tcp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: The container Tripleo::Firewall::Rule[130 pacemaker tcp] will propagate my refresh event", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): 
Inserting rule 130 pacemaker tcp ipv6", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 2224,3121,21064 -m state --state NEW -j ACCEPT -m comment --comment 130 pacemaker tcp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[130 pacemaker tcp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: The container Tripleo::Firewall::Rule[130 pacemaker tcp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: The container Tripleo::Firewall::Service_rules[pacemaker] will propagate my refresh event", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): Inserting rule 131 pacemaker udp ipv4", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 26 --wait -t filter -p udp -m multiport --dports 5405 -m state --state NEW -j ACCEPT -m comment --comment 131 pacemaker udp ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [flush]", > "Debug: Firewall[131 pacemaker udp ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: The container Tripleo::Firewall::Rule[131 pacemaker udp] will propagate my refresh event", > "Debug: 
Firewall[131 pacemaker udp ipv6](provider=ip6tables): Inserting rule 131 pacemaker udp ipv6", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t filter -p udp -m multiport --dports 5405 -m state --state NEW -j ACCEPT -m comment --comment 131 pacemaker udp ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[131 pacemaker udp ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: The container Tripleo::Firewall::Rule[131 pacemaker udp] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: The container Tripleo::Firewall::Service_rules[pacemaker] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): Inserting rule 140 panko-api ipv4", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8977,13977 -m state --state NEW -j ACCEPT -m comment --comment 140 panko-api ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [flush]", > "Debug: Firewall[140 panko-api ipv4](provider=iptables): [persist_iptables]", > 
"Debug: /Firewall[140 panko-api ipv4]: The container Tripleo::Firewall::Rule[140 panko-api] will propagate my refresh event", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): Inserting rule 140 panko-api ipv6", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8977,13977 -m state --state NEW -j ACCEPT -m comment --comment 140 panko-api ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[140 panko-api ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[140 panko-api ipv6]: The container Tripleo::Firewall::Rule[140 panko-api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: The container Tripleo::Firewall::Service_rules[panko_api] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): Inserting rule 109 rabbitmq-bundle ipv4", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3122,4369,5672,25672 -m state --state NEW -j ACCEPT -m comment --comment 109 rabbitmq-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Debug: 
Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[109 rabbitmq-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: The container Tripleo::Firewall::Rule[109 rabbitmq-bundle] will propagate my refresh event", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): Inserting rule 109 rabbitmq-bundle ipv6", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 3122,4369,5672,25672 -m state --state NEW -j ACCEPT -m comment --comment 109 rabbitmq-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[rabbitmq]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[109 rabbitmq-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: The container Tripleo::Firewall::Rule[109 rabbitmq-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: The container Tripleo::Firewall::Service_rules[rabbitmq] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[rabbitmq]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): Inserting rule 108 redis-bundle ipv4", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 3124,6379,26379 -m state --state NEW -j ACCEPT -m comment 
--comment 108 redis-bundle ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [flush]", > "Debug: Firewall[108 redis-bundle ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: The container Tripleo::Firewall::Rule[108 redis-bundle] will propagate my refresh event", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): Inserting rule 108 redis-bundle ipv6", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 3124,6379,26379 -m state --state NEW -j ACCEPT -m comment --comment 108 redis-bundle ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[108 redis-bundle ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: The container Tripleo::Firewall::Rule[108 redis-bundle] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: The container Tripleo::Firewall::Service_rules[redis] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[redis]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): Inserting rule 122 swift proxy ipv4", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: 
Executing: '/usr/sbin/iptables -I INPUT 22 --wait -t filter -p tcp -m multiport --dports 8080,13808 -m state --state NEW -j ACCEPT -m comment --comment 122 swift proxy ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[122 swift proxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: The container Tripleo::Firewall::Rule[122 swift proxy] will propagate my refresh event", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): Inserting rule 122 swift proxy ipv6", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 22 --wait -t filter -p tcp -m multiport --dports 8080,13808 -m state --state NEW -j ACCEPT -m comment --comment 122 swift proxy ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[122 swift proxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: The container Tripleo::Firewall::Rule[122 swift proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: The container Tripleo::Firewall::Service_rules[swift_proxy] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): Inserting rule 123 swift storage ipv4", > "Debug: Firewall[123 swift storage 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 23 --wait -t filter -p tcp -m multiport --dports 873,6000,6001,6002 -m state --state NEW -j ACCEPT -m comment --comment 123 swift storage ipv4'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [flush]", > "Debug: Firewall[123 swift storage ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[123 swift storage ipv4]: The container Tripleo::Firewall::Rule[123 swift storage] will propagate my refresh event", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): Inserting rule 123 swift storage ipv6", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 23 --wait -t filter -p tcp -m multiport --dports 873,6000,6001,6002 -m state --state NEW -j ACCEPT -m comment --comment 123 swift storage ipv6'", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[123 swift storage ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[123 swift storage ipv6]: The container Tripleo::Firewall::Rule[123 swift storage] will propagate my refresh event", > "Debug: Class[Firewall::Linux::Redhat]: The container Stage[main] will propagate my refresh event", > "Debug: Exec[nonpersistent_v4_rules_cleanup](provider=posix): Executing check '/bin/test -f 
/etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/iptables && /bin/grep -q neutron- /etc/sysconfig/iptables'", > "Debug: Exec[nonpersistent_v6_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/ip6tables && /bin/grep -q neutron- /etc/sysconfig/ip6tables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/iptables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/iptables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup](provider=posix): Executing check '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector'", > "Debug: Executing: '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/iptables | /bin/grep -q ironic-inspector'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup](provider=posix): Executing check '/bin/test -f /etc/sysconfig/ip6tables'", > "Debug: Executing: '/bin/test -f /etc/sysconfig/ip6tables'", > "Debug: Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup](provider=posix): Executing check '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector'", > "Debug: Executing: '/bin/grep -v \"\\-m comment \\--comment\" /etc/sysconfig/ip6tables | /bin/grep -q ironic-inspector'", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: The container Tripleo::Firewall::Service_rules[swift_storage] will propagate my refresh event", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: The container Class[Tripleo::Firewall] will propagate my refresh event", > "Debug: Class[Tripleo::Firewall]: The container Stage[main] will propagate my refresh event", > "Debug: Finishing transaction 39349540", > "Debug: 
Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.03 seconds", > "Notice: Applied catalog in 76.28 seconds", > "Changes:", > " Total: 166", > "Events:", > " Success: 166", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 212", > " Restarted: 4", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Group: 0.02", > " User: 0.05", > " Sysctl: 0.06", > " File: 0.21", > " Sysctl runtime: 0.29", > " Augeas: 0.35", > " Package: 0.41", > " Firewall: 15.16", > " Last run: 1534432960", > " Service: 3.86", > " Config retrieval: 5.02", > " Exec: 52.49", > " Total: 77.93", > " Concat fragment: 0.00", > "Version:", > " Config: 1534432879", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections reporting, metrics", > "Debug: Finishing transaction 58968940", > "Debug: Received report to process from controller-2.localdomain", > "Debug: Processing report from controller-2.localdomain with processor Puppet::Reports::Store", > "erlexec: HOME must be set", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ip_address instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp\", 56]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 35]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ssh/manifests/server.pp\", 12]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 42]" > ] > } > > TASK [Run docker-puppet tasks (generate config) during step 1] ***************** > ok: [localhost] > > TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** > fatal: [localhost]: FAILED! 
=> { > "failed_when_result": true, > "outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [ > "2018-08-16 15:22:45,330 INFO: 22712 -- Running docker-puppet", > "2018-08-16 15:22:45,331 INFO: 22712 -- Service compilation completed.", > "2018-08-16 15:22:45,332 INFO: 22712 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-08-16 15:22:45,344 INFO: 22713 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4", > "2018-08-16 15:22:45,344 INFO: 22714 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:22:45,344 INFO: 22715 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:22:45,345 INFO: 22713 -- Removing container: docker-puppet-nova_placement", > "2018-08-16 15:22:45,346 INFO: 22714 -- Removing container: docker-puppet-heat_api", > "2018-08-16 15:22:45,346 INFO: 22715 -- Removing container: docker-puppet-mysql", > "2018-08-16 15:22:45,389 INFO: 22713 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-nova-placement-api:2018-08-14.4", > "2018-08-16 15:22:45,389 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:22:45,390 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:23:12,729 INFO: 22715 -- Removing container: docker-puppet-mysql", > "2018-08-16 15:23:12,775 INFO: 22715 -- Finished processing puppet configs for mysql", > "2018-08-16 15:23:12,775 INFO: 22715 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4", > "2018-08-16 15:23:12,776 INFO: 22715 -- Removing container: docker-puppet-gnocchi", > "2018-08-16 15:23:12,805 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-gnocchi-api:2018-08-14.4", > "2018-08-16 
15:23:20,402 INFO: 22714 -- Removing container: docker-puppet-heat_api", > "2018-08-16 15:23:20,467 INFO: 22714 -- Finished processing puppet configs for heat_api", > "2018-08-16 15:23:20,467 INFO: 22714 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:23:20,468 INFO: 22714 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-16 15:23:20,504 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:23:22,004 INFO: 22713 -- Removing container: docker-puppet-nova_placement", > "2018-08-16 15:23:22,073 INFO: 22713 -- Finished processing puppet configs for nova_placement", > "2018-08-16 15:23:22,073 INFO: 22713 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4", > "2018-08-16 15:23:22,074 INFO: 22713 -- Removing container: docker-puppet-aodh", > "2018-08-16 15:23:22,106 INFO: 22713 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-aodh-api:2018-08-14.4", > "2018-08-16 15:23:35,316 INFO: 22715 -- Removing container: docker-puppet-gnocchi", > "2018-08-16 15:23:35,372 INFO: 22715 -- Finished processing puppet configs for gnocchi", > "2018-08-16 15:23:35,373 INFO: 22715 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:35,373 INFO: 22715 -- Removing container: docker-puppet-clustercheck", > "2018-08-16 15:23:35,398 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-mariadb:2018-08-14.4", > "2018-08-16 15:23:38,694 INFO: 22713 -- Removing container: docker-puppet-aodh", > "2018-08-16 15:23:38,754 INFO: 22713 -- Finished processing puppet configs for aodh", > "2018-08-16 15:23:38,754 INFO: 22713 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4", > "2018-08-16 15:23:38,755 INFO: 22713 -- Removing 
container: docker-puppet-nova", > "2018-08-16 15:23:38,783 INFO: 22713 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-nova-api:2018-08-14.4", > "2018-08-16 15:23:39,586 INFO: 22714 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-16 15:23:39,658 INFO: 22714 -- Finished processing puppet configs for swift_ringbuilder", > "2018-08-16 15:23:39,659 INFO: 22714 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4", > "2018-08-16 15:23:39,659 INFO: 22714 -- Removing container: docker-puppet-glance_api", > "2018-08-16 15:23:39,686 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-glance-api:2018-08-14.4", > "2018-08-16 15:23:43,394 INFO: 22715 -- Removing container: docker-puppet-clustercheck", > "2018-08-16 15:23:43,438 INFO: 22715 -- Finished processing puppet configs for clustercheck", > "2018-08-16 15:23:43,438 INFO: 22715 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4", > "2018-08-16 15:23:43,438 INFO: 22715 -- Removing container: docker-puppet-redis", > "2018-08-16 15:23:43,466 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-redis:2018-08-14.4", > "2018-08-16 15:23:54,810 INFO: 22715 -- Removing container: docker-puppet-redis", > "2018-08-16 15:23:54,852 INFO: 22715 -- Finished processing puppet configs for redis", > "2018-08-16 15:23:54,852 INFO: 22715 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4", > "2018-08-16 15:23:54,853 INFO: 22715 -- Removing container: docker-puppet-memcached", > "2018-08-16 15:23:54,878 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-memcached:2018-08-14.4", > "2018-08-16 15:23:57,900 INFO: 22713 -- Removing container: docker-puppet-nova", > "2018-08-16 15:23:57,954 INFO: 22713 -- Finished processing puppet configs for nova", > "2018-08-16 15:23:57,954 INFO: 22713 -- Starting 
configuration of iscsid using image 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4", > "2018-08-16 15:23:57,955 INFO: 22713 -- Removing container: docker-puppet-iscsid", > "2018-08-16 15:23:57,981 INFO: 22713 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-iscsid:2018-08-14.4", > "2018-08-16 15:24:00,158 INFO: 22714 -- Removing container: docker-puppet-glance_api", > "2018-08-16 15:24:00,206 INFO: 22714 -- Finished processing puppet configs for glance_api", > "2018-08-16 15:24:00,206 INFO: 22714 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4", > "2018-08-16 15:24:00,207 INFO: 22714 -- Removing container: docker-puppet-keystone", > "2018-08-16 15:24:00,231 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-keystone:2018-08-14.4", > "2018-08-16 15:24:04,539 INFO: 22715 -- Removing container: docker-puppet-memcached", > "2018-08-16 15:24:04,584 INFO: 22715 -- Finished processing puppet configs for memcached", > "2018-08-16 15:24:04,585 INFO: 22715 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4", > "2018-08-16 15:24:04,585 INFO: 22715 -- Removing container: docker-puppet-panko", > "2018-08-16 15:24:04,609 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-panko-api:2018-08-14.4", > "2018-08-16 15:24:06,050 INFO: 22713 -- Removing container: docker-puppet-iscsid", > "2018-08-16 15:24:06,096 INFO: 22713 -- Finished processing puppet configs for iscsid", > "2018-08-16 15:24:06,097 INFO: 22713 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:24:06,098 INFO: 22713 -- Removing container: docker-puppet-heat", > "2018-08-16 15:24:06,124 INFO: 22713 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api:2018-08-14.4", > "2018-08-16 15:24:17,762 INFO: 22713 -- Removing container: docker-puppet-heat", > "2018-08-16 15:24:17,797 INFO: 
22713 -- Finished processing puppet configs for heat", > "2018-08-16 15:24:17,798 INFO: 22713 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4", > "2018-08-16 15:24:17,798 INFO: 22713 -- Removing container: docker-puppet-cinder", > "2018-08-16 15:24:17,827 INFO: 22713 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-cinder-api:2018-08-14.4", > "2018-08-16 15:24:17,994 INFO: 22714 -- Removing container: docker-puppet-keystone", > "2018-08-16 15:24:18,058 INFO: 22714 -- Finished processing puppet configs for keystone", > "2018-08-16 15:24:18,059 INFO: 22714 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:24:18,059 INFO: 22714 -- Removing container: docker-puppet-swift", > "2018-08-16 15:24:18,088 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-swift-proxy-server:2018-08-14.4", > "2018-08-16 15:24:20,293 INFO: 22715 -- Removing container: docker-puppet-panko", > "2018-08-16 15:24:20,355 INFO: 22715 -- Finished processing puppet configs for panko", > "2018-08-16 15:24:20,357 INFO: 22715 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4", > "2018-08-16 15:24:20,358 INFO: 22715 -- Removing container: docker-puppet-haproxy", > "2018-08-16 15:24:20,384 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-haproxy:2018-08-14.4", > "2018-08-16 15:24:29,779 INFO: 22714 -- Removing container: docker-puppet-swift", > "2018-08-16 15:24:29,820 INFO: 22714 -- Finished processing puppet configs for swift", > "2018-08-16 15:24:29,821 INFO: 22714 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4", > "2018-08-16 15:24:29,821 INFO: 22714 -- Removing container: docker-puppet-crond", > "2018-08-16 15:24:29,846 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-cron:2018-08-14.4", > 
"2018-08-16 15:24:32,737 ERROR: 22715 -- Failed running docker-puppet.py for haproxy", > "2018-08-16 15:24:32,738 ERROR: 22715 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "", > "2018-08-16 15:24:32,738 ERROR: 22715 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-2", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: Unknown variable: 'haproxy_member_options_real'. 
at /etc/puppet/modules/tripleo/manifests/haproxy.pp:1082:34", > "Error: Evaluation Error: Error while evaluating a Function Call, union(): Every parameter must be an array at /etc/puppet/modules/tripleo/manifests/haproxy.pp:1082:28 on node controller-2.localdomain", > "+ rc=1", > "+ set -e", > "+ '[' 1 -ne 2 -a 1 -ne 0 ']'", > "+ exit 1", > "2018-08-16 15:24:32,738 INFO: 22715 -- Finished processing puppet configs for haproxy", > "2018-08-16 15:24:32,738 INFO: 22715 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4", > "2018-08-16 15:24:32,739 INFO: 22715 -- Removing container: docker-puppet-ceilometer", > "2018-08-16 15:24:32,764 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-ceilometer-central:2018-08-14.4", > "2018-08-16 15:24:38,399 INFO: 22714 -- Removing container: docker-puppet-crond", > "2018-08-16 15:24:38,447 INFO: 22714 -- Finished processing puppet configs for crond", > "2018-08-16 15:24:38,447 INFO: 22714 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4", > "2018-08-16 15:24:38,448 INFO: 22714 -- Removing container: docker-puppet-rabbitmq", > "2018-08-16 15:24:38,472 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-rabbitmq:2018-08-14.4", > "2018-08-16 15:24:44,598 INFO: 22715 -- Removing container: docker-puppet-ceilometer", > "2018-08-16 15:24:44,632 INFO: 22715 -- Finished processing puppet configs for ceilometer", > "2018-08-16 15:24:44,633 INFO: 22715 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4", > "2018-08-16 15:24:44,633 INFO: 22715 -- Removing container: docker-puppet-horizon", > "2018-08-16 15:24:44,660 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-horizon:2018-08-14.4", > "2018-08-16 15:24:47,069 INFO: 22713 -- Removing container: docker-puppet-cinder", > "2018-08-16 15:24:47,134 INFO: 22713 -- 
Finished processing puppet configs for cinder", > "2018-08-16 15:24:57,101 INFO: 22714 -- Removing container: docker-puppet-rabbitmq", > "2018-08-16 15:24:57,157 INFO: 22714 -- Finished processing puppet configs for rabbitmq", > "2018-08-16 15:24:57,157 INFO: 22714 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4", > "2018-08-16 15:24:57,158 INFO: 22714 -- Removing container: docker-puppet-neutron", > "2018-08-16 15:24:57,182 INFO: 22714 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-neutron-server:2018-08-14.4", > "2018-08-16 15:25:02,417 INFO: 22715 -- Removing container: docker-puppet-horizon", > "2018-08-16 15:25:02,472 INFO: 22715 -- Finished processing puppet configs for horizon", > "2018-08-16 15:25:02,472 INFO: 22715 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4", > "2018-08-16 15:25:02,474 INFO: 22715 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-16 15:25:02,505 INFO: 22715 -- Pulling image: 192.168.24.1:8787/rhosp13/openstack-heat-api-cfn:2018-08-14.4", > "2018-08-16 15:25:16,654 INFO: 22714 -- Removing container: docker-puppet-neutron", > "2018-08-16 15:25:16,702 INFO: 22714 -- Finished processing puppet configs for neutron", > "2018-08-16 15:25:18,155 INFO: 22715 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-16 15:25:18,208 INFO: 22715 -- Finished processing puppet configs for heat_api_cfn", > "2018-08-16 15:25:18,209 ERROR: 22712 -- ERROR configuring haproxy" > ] > } > to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/409ebc22-e3ac-4921-96c6-2d19078e43fa_playbook.retry > > PLAY RECAP ********************************************************************* > localhost : ok=25 changed=12 unreachable=0 failed=1 > > deploy_stderr: | >